
How can you state as a general conclusion that "the second algorithm takes less time"?
For algorithm 1, execution time depends on four different input variables (m, n, x, y) and on the outcomes of four different if-tests, so you may consider p(if1) .. p(if4) four additional input variables. Total execution time depends heavily on the resources (mainly CPU) required to evaluate the four different conditions and to perform the seven different actions. The execution counts for six of the actions depend on two or three input variables.
Algorithm 2 has three input variables (m, n, x) and five tests, so p(if1) .. p(if5) are input variables. Timings depend on five different condition evaluation times and seven different action execution times. Here, too, the execution counts for six of the seven actions, and for two of the conditions, depend on two or three input variables.
An algorithm, as such, is independent of both its implementation and the context where it is applied; each step may have an arbitrary interpretation. (The pseudocode description reflects this.) So, e.g., performing another loop iteration may involve a lot more than incrementing an integer variable and performing a conditional jump back to the top; it could take a lot of work to get hold of the next element. The loop management itself might then be the dominating cost, and measured time might be practically independent of all the 'if()'s and 'do something's.
Total execution time depends heavily on the relative cost of evaluating an 'if()' vs. performing an action ('do something'). Say the 'if()' searches a photo archive for images similar to a candidate photo (a very heavy test) while the action simply logs the outcome (very lightweight): then the number of tests dominates the execution time, almost independently of the p(if) probabilities. Conversely, if a test is super cheap, like looking up the photo's serial number in a list, but the action is heavy, like compressing the image, then the test outcomes (determined by p(if)) strongly affect execution times.
All of this affects benchmark timings, but only the timings. No matter how dramatic the effects on the timings when you apply the same algorithm in another context, or even port the same program code to another platform with significantly different relative costs: O() is unaffected by the relative costs of loop management, tests and actions.
Given these considerations, how can we make any reliable statements about algorithm complexity based on execution times measured with a (usually very small) set of combinations of eight explicit input values and a bunch of hidden ones (the relative costs of test/action execution)? Can you claim that algorithm 2 is faster than algorithm 1 for any combination of input variables? Even considering that y affects algorithm 1 but not algorithm 2? Even though algorithm 1 depends on four different p(if) values, and algorithm 2 on five?
If you give me the freedom to set arbitrary cost values (in terms of CPU execution time) for all the actions and if() tests, and to freely choose values for m, n, x and y and the p(if) values, I am quite certain that I could make algorithm 2 significantly slower for some combinations, but faster for others.
Execution times "all depend"! If you set up an O() expression in the input variables, you can, for a given set of input values, estimate the (relative) expected execution time of the alternatives.
Usually, you simplify the O() expression, leaving out e.g. constant parts (like step 4 / step 2, independent of input values). If you know that e.g. the if-tests will always be super simple, you may leave them out, but you should make that explicit in your presentation of the complexity. It is customary to give O() only by the highest-order subexpression(s) (such as m*m*x in algorithm 2, disregarding all the p(if) values), but as a main rule, that restricts your complexity analysis to non-extreme values of the parts you have shaved off. If you know the application area, you will usually know the expected range of variation of each value, so you may e.g. chop off the x if it varies from 1 to 3, but not if it varies from zero to a billion.
In any case: benchmarks generally cover only a tiny selection of the possible explicit inputs (especially in an 8-dimensional input space!), and they are based on a large number of hidden (usually unidentified) factors affecting the results. For a specific application of the algorithm, in a specific implementation environment, and with a limited value range for all inputs, benchmarks may give an indication of expected execution time. But they cannot provide anything comparable to an analytically derived expression for the complexity.
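The "it all depends" point can be made concrete with a toy cost model. All the operation counts and unit costs below are made-up numbers (not derived from the two algorithms under discussion); the sketch only shows that swapping which operation is expensive flips which algorithm wins:

```java
// Toy cost model: total time = tests * testCost + actions * actionCost.
// All counts and unit costs here are hypothetical, chosen only to show
// that the relative cost of a test vs. an action can decide the "winner".
public class CostModel {
    static double totalCost(long tests, long actions,
                            double testCost, double actionCost) {
        return tests * testCost + actions * actionCost;
    }

    public static void main(String[] args) {
        // Hypothetical profiles: "algorithm 1" is test-heavy,
        // "algorithm 2" is action-heavy.
        long tests1 = 1_000_000, actions1 = 10_000;
        long tests2 = 10_000, actions2 = 1_000_000;

        // Cheap tests, expensive actions: algorithm 1 is cheaper.
        System.out.println(totalCost(tests1, actions1, 1.0, 100.0)
                         < totalCost(tests2, actions2, 1.0, 100.0)); // true

        // Expensive tests, cheap actions: algorithm 2 is cheaper.
        System.out.println(totalCost(tests2, actions2, 100.0, 1.0)
                         < totalCost(tests1, actions1, 100.0, 1.0)); // true
    }
}
```

Same operation counts in both runs; only the hidden cost factors changed, and the verdict reversed.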





You're overthinking it: it's a thought experiment.
One assumes all the "if's" and "do somethings" are equivalent. It's the "complexity" (looping and branching) that one is interested in; not a bunch of "yeah buts".
It was only in wine that he laid down no limit for himself, but he did not allow himself to be confused by it.
― Confucian Analects: Rules of Confucius about his food





Hi all, I tried to implement the Sieve of Eratosthenes for finding all primes under a given number n.
I came up with this solution, and I think it's kind of neat. But when I run it, it is many times slower than the standard solution I found on the net afterwards.
Can someone explain to me why my solution is slower:
public static List<Integer> sieveEratosthenes(List<Integer> numbers) {
    int baseNum = 0;
    while (baseNum < Math.sqrt(numbers.size())) {
        for (int i = 1; i < numbers.size() - baseNum; i++) {
            if (numbers.get(baseNum + i) % numbers.get(baseNum) == 0) {
                numbers.remove(baseNum + i);
            }
        }
        baseNum++;
    }
    return numbers;
}





while(baseNum < Math.sqrt(numbers.size())){
You are calculating the square root each time round the loop. Change it to:
int root = (int) Math.sqrt(numbers.size());
while (baseNum < root) {
Also, make sure your list only contains odd numbers. Or, even faster, use a fixed-size array.





I suspect it's also going to be incorrect, since numbers.size() will potentially change on each iteration.
"These people looked deep within my soul and assigned me a number based on the order in which I joined."
 Homer





It's slow because you're using a List instead of an array.
Every time you remove an entry from the List, the remaining values in the list after the item removed have to be moved down one position. Depending on the size of the List, that can take quite a bit of time.
Use an array of flags instead of integers.
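A minimal sketch of the flags approach, using the textbook Sieve of Eratosthenes (this is the standard algorithm, not a line-by-line fix of the original code):

```java
import java.util.ArrayList;
import java.util.List;

public class FlagSieve {
    // composite[i] == true means i has been crossed out as a multiple.
    static List<Integer> primesUpTo(int n) {
        boolean[] composite = new boolean[n + 1];
        for (int p = 2; (long) p * p <= n; p++) {
            if (!composite[p]) {
                // Start at p*p: smaller multiples were already crossed
                // out by smaller primes.
                for (int m = p * p; m <= n; m += p) {
                    composite[m] = true;
                }
            }
        }
        List<Integer> primes = new ArrayList<>();
        for (int i = 2; i <= n; i++) {
            if (!composite[i]) primes.add(i);
        }
        return primes;
    }

    public static void main(String[] args) {
        System.out.println(primesUpTo(30)); // [2, 3, 5, 7, 11, 13, 17, 19, 23, 29]
    }
}
```

No element is ever moved: crossing out a composite is a single array write, so there is none of the shifting that List.remove() causes.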





Question:
I always thought .NET Lists were implemented as linked lists. After all, what would differentiate them from arrays?
The difficult we do right away...
...the impossible takes slightly longer.





Nope. Internally, it's an array. You can see it in the reference source: Reference Source - List<T>[^]
What differentiates it from a normal array is just the management of the array size, making it easy to add and remove items. If the array gets too full, it automatically creates a new, larger array and copies the items over to it.





Member 14932873 wrote: Can someone explain to me why my solution is slower:
Your code is slow because it combines all the worst choices.
- You are continuously resizing the list, which is very expensive.
Use an array allocated once.
- You are checking whether a number is a multiple with a modulo, which is very expensive too.
When you have found a prime, start from the prime and repeatedly add the prime to scan all its multiples.
Speed of operations, fast to slow:
Addition, subtraction < multiplication, squaring < division, modulo, square root
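The gap between the two approaches can be sketched with a back-of-envelope operation count. This is a rough upper bound that, for simplicity, treats every base number up to sqrt(n) as a divisor (not just the primes):

```java
public class OpCount {
    // Modulo scan: roughly one % test per candidate, for each base number.
    static long moduloChecks(int n) {
        long count = 0;
        for (int p = 2; p * p <= n; p++) {
            count += n - p; // one test per candidate above p (upper bound)
        }
        return count;
    }

    // Add-the-prime scan: one array write per multiple of each base number.
    static long steppingWrites(int n) {
        long count = 0;
        for (int p = 2; p * p <= n; p++) {
            count += n / p; // only the multiples of p are touched
        }
        return count;
    }

    public static void main(String[] args) {
        System.out.println(moduloChecks(100_000));   // tens of millions
        System.out.println(steppingWrites(100_000)); // hundreds of thousands
    }
}
```

The stepping scan touches roughly n/p cells per base p, while the modulo scan tests on the order of n candidates per base, and each of those tests is a division on top of it.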
Patrice
“Everything should be made as simple as possible, but no simpler.” Albert Einstein





I'm building a language learning website. In one of the exercises the user has to guess the missing words. For example in the sentence:
What is the weather like? It is sunny today.
My developer has created an algorithm that shows the first and last words of the sentence, while removing the alternate words within it. The result is:
What ____ the ____ like? ____ is ____ today.
However, in a sentence like this:
It is not the entrance, it is the exit.
My developer says that the algorithm does not work because the word "the" is repeated.
I don't understand why the algorithm does not work when words are repeated. They still appear in alternate position, right?
Is there a solution? I could post our current algorithm if necessary.





It doesn't work because the last sample has an "even" number of words. The "last word" would have to be the last 2 words (due to "alternating"). Nothing to do with "the".
This adds the "last 2 words" if the word count is even.
string sentence = "It is not the entrance, it is the other exit.";
var words = sentence.Split( new char[] { ',', '.', ' ' }, StringSplitOptions.RemoveEmptyEntries );
List<string> selected = new List<string>();
for ( int i = 0; i < words.Length; i++ ) {
    if ( ( i + 1 ) % 2 > 0 ) {
        selected.Add( words[ i ] );
    }
}
if ( words.Length > 1 && words.Length % 2 == 0 ) {
    selected.Add( words.Last() );
}
string result = string.Join( " ", selected );
Console.WriteLine( sentence );
Console.WriteLine( result );
Console.ReadKey();
It was only in wine that he laid down no limit for himself, but he did not allow himself to be confused by it.
― Confucian Analects: Rules of Confucius about his food
modified 6Sep20 12:46pm.





Gerry Schmitz wrote: the last sample has an "even" number of words
Really, 9 is even?





It's either odd or even that doesn't work. It's early.
It was only in wine that he laid down no limit for himself, but he did not allow himself to be confused by it.
― Confucian Analects: Rules of Confucius about his food





maicart wrote: My developer says that the algorithm does not work because the word "the" is repeated.
I think you should ask him to explain.





I'll talk to him tomorrow. If he does not solve the issue I will paste the algorithm in this thread in case you could help us.





I cannot think what could be difficult in a piece of code to remove alternate words from a sentence.





I suspect it is homework.
The difficult we do right away...
...the impossible takes slightly longer.





I just talked to my developer. Actually, there are 2 rules:
1st rule: In case of an odd number of words, the system will show the first and last word and will remove alternate words in the middle. Example:
A pear and an apple.
After the algorithm, the system will show:
A _____ and _____ apple.
2nd rule: In the case of an even number of words, the system will also show the first and last word and will randomly remove any of the alternate clues. Example:
How are you today?
After the algorithm, the result could be rendered either a) or b)
a) How _____ you today?
b) How are _____ today?
My developer sent me a PHP file with a clarification. Here's the link:
ClueAlgo.php  Google Drive[^]
I would appreciate your help if this is a quick fix. If it takes time this could be a paid job, for example on Fiverr.com
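For reference, here is one possible reading of the two rules as a Java sketch. It is a guess at the intended behaviour, not the actual code in the ClueAlgo.php file linked above; in particular, the even-count rule is assumed to blank exactly one randomly chosen interior word.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Random;

public class ClueBlanker {
    static String blank(String sentence, Random rng) {
        String[] words = sentence.split(" ");
        int n = words.length;
        if (n <= 2) return sentence; // too short to blank anything
        List<String> out = new ArrayList<>();
        if (n % 2 == 1) {
            // Odd count: keep first and last, blank every other interior word.
            for (int i = 0; i < n; i++) {
                out.add(i % 2 == 1 ? "_____" : words[i]);
            }
        } else {
            // Even count (assumption): blank one random interior word.
            int pick = 1 + rng.nextInt(n - 2);
            for (int i = 0; i < n; i++) {
                out.add(i == pick ? "_____" : words[i]);
            }
        }
        return String.join(" ", out);
    }

    public static void main(String[] args) {
        System.out.println(blank("A pear and an apple.", new Random()));
        // A _____ and _____ apple.
        System.out.println(blank("How are you today?", new Random()));
        // either "How _____ you today?" or "How are _____ today?"
    }
}
```

With an odd word count the output is deterministic; with an even count, passing a seeded Random makes it reproducible for testing.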





Sorry, I don't understand what help you are asking for. If you want it changed then tell your developer what he needs to do.





Imagine having 'N' rectilinear blocks; 'N' can be any number (< 1000), and the blocks are of different but similar sizes.
See here... blocks — ImgBB[^]
I need an algorithm that will place these in a “ring” fashion, such that the area is minimized. White spaces or blank spaces within the ring are fine.
Ring picture... ring — ImgBB[^]
The constraints are…
Each rectilinear block must be placed
Minimize the area (x*y)
Create a ring such as below
Ring implementation
Note that my two pictures don't align exactly, meaning not all the blocks in the first picture are placed in the second. These pictures are only provided as reference/examples.
I'm not a computer scientist by trade. This would seem to be a cost optimization problem, but I'm having a problem wading through the many optimization algorithms out there. Any guidance on which algorithm would be viable?
Thanks!





rbuchana wrote: Create a ring such as below
What exactly is a ring? Of course I looked at the example, but it could be interpreted differently.
Can the ring look like this? Or this? Maybe like this? Maybe even like this?
Are the coordinates/sizes discrete?





Door #3... image — ImgBB[^]
Difficult to explain in words, but the idea is to have a rectangle in the middle with no protrusions.
Coordinates/sizes are not discrete.





Cut the pieces out, move them around until optimum, then reverse engineer into a software solution.
It was only in wine that he laid down no limit for himself, but he did not allow himself to be confused by it.
― Confucian Analects: Rules of Confucius about his food





Are you stating all of the constraints? To minimize the x*y area, it would often be preferable to have two very long sides and two very short ones for the inner rectangle.
For example, say that all rectangles are the same shape and that there are 4n of them. They could be arranged to leave a square in the middle. Let's say the size of the square is a*a and that the four outer areas are also a*a. The x*y area would then be 3a*3a=9a^2.
But instead of leaving an inner square, we can reduce two of its sides by 50% and increase the other two by 50%. Now we have an area of (7a/2)(5a/2) = (35/4)a^2, which is slightly smaller. The size of the inner rectangle is now (a/2)*(3a/2) = (3/4)a^2 instead of a^2.
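The arithmetic above checks out, and can be verified mechanically (taking a = 1; the helper name area() is mine, not from the post):

```java
public class RingArea {
    static double area(double w, double h) { return w * h; }

    public static void main(String[] args) {
        double a = 1.0;
        // Square layout: outer 3a x 3a, inner hole a x a.
        double squareRing = area(3 * a, 3 * a) - area(a, a);
        // Stretched layout: outer (5a/2) x (7a/2), inner (a/2) x (3a/2).
        double stretchedRing = area(5 * a / 2, 7 * a / 2) - area(a / 2, 3 * a / 2);

        // Both layouts hold the same total block area (8 a^2) ...
        System.out.println(squareRing == stretchedRing);                      // true
        // ... but the stretched bounding box is smaller (8.75 a^2 < 9 a^2).
        System.out.println(area(5 * a / 2, 7 * a / 2) < area(3 * a, 3 * a)); // true
    }
}
```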





You're correct. There is somewhat of a constraint on the aspect ratio (x/y), but also some flexibility. Meaning it doesn't need to be 50%, but 10% or 90% are probably too much. But no hard limit.




