
You should be the computer. Start at the beginning, keep a written note of the current values of x, i, and the length of the message, and proceed to the next statement. It might seem tedious, but it will help you develop a skill that will serve you well (as it has served me well in my 40+ years as a programmer).





Problem Statement:
Given a graph that represents a flow network in which every edge has a capacity, and two vertices in the graph, a source 's' and a sink 't', find the maximum possible flow from s to t subject to the following constraints:
- Flow on an edge doesn't exceed the given capacity of the edge.
- Incoming flow is equal to outgoing flow for every vertex except s and t.
Dinic's algorithm for Maximum Flow - Code Companion[^]
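The two constraints above are exactly what augmenting-path algorithms maintain. As a rough illustration (not Dinic's itself, but Edmonds-Karp, a simpler BFS-based relative), assuming the network is given as an adjacency matrix of capacities:

```python
from collections import deque

def max_flow(capacity, s, t):
    """Edmonds-Karp: repeatedly push flow along a shortest augmenting path."""
    n = len(capacity)
    flow = [[0] * n for _ in range(n)]
    total = 0
    while True:
        # BFS for a shortest s->t path with residual capacity left.
        parent = [-1] * n
        parent[s] = s
        q = deque([s])
        while q and parent[t] == -1:
            u = q.popleft()
            for v in range(n):
                if parent[v] == -1 and capacity[u][v] - flow[u][v] > 0:
                    parent[v] = u
                    q.append(v)
        if parent[t] == -1:          # no augmenting path left: flow is maximal
            return total
        # Find the bottleneck residual capacity along the path.
        bottleneck, v = float('inf'), t
        while v != s:
            u = parent[v]
            bottleneck = min(bottleneck, capacity[u][v] - flow[u][v])
            v = u
        # Augment; the negative reverse entries let later paths cancel flow.
        v = t
        while v != s:
            u = parent[v]
            flow[u][v] += bottleneck
            flow[v][u] -= bottleneck
            v = u
        total += bottleneck
```

Dinic's algorithm improves on this by building a BFS level graph once per phase and sending blocking flow with DFS, but it enforces the same two constraints.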





And you have a question?
Patrice
“Everything should be made as simple as possible, but no simpler.” Albert Einstein





Hello to all! I need to solve a problem. An approximate representation and a partial solution already exist. This controversial mathematical question may put you off, since I do not go into some important points; I just want to state the basic idea. I hope someone has enough time and that it will interest you. In programming, we are all accustomed to using only the numerical number system. However, theoretically, there is a vector system. Why not try to implement it in such a way that the distances between the cells of information are also carriers of information? Because of this, the counting rate would be completely predetermined, and accordingly it would be possible to solve much more complex tasks. Below I have laid out my presentation; the simplest implementation can be represented through the creation of an emulator.
C/C++ can be used.
// designation in area. Position Form.
arr1 =
Vector1 = Vector3 + Vector2, Vector4 + (Vector5), Vector10 + (Vector12), Vector16 + (Vector17) = 1;
Vector2 = (Vector3) + Vector1, Vector6 + (Vector5), Vector15 + (Vector13), Vector18 + (Vector17) = 1;
Vector3 = Vector1 + (Vector2), Vector4 + (Vector6), Vector7 + (Vector9), Vector16 + (Vector18) = 1;
Vector4 = Vector1 + Vector5, Vector3 + Vector6, Vector10 + (Vector11), Vector7 + (Vector8) = 1;
Vector5 = (Vector1) + Vector4, (Vector2) + Vector6, Vector12 + (Vector11), Vector13 + (Vector14) = 1;
Vector6 = (Vector5) + (Vector2), (Vector3) + Vector4, Vector15 + (Vector14), Vector9 + (Vector8) = 1;
Vector7 = Vector4 + Vector8, Vector3 + Vector9 = 1;
Vector8 = (Vector6) + Vector9, (Vector4) + Vector7 = 1;
Vector9 = Vector6 + Vector8, (Vector3) + Vector7 = 1;
Vector10 = Vector4 + Vector11, Vector1 + Vector12 = 1;
Vector11 = (Vector5) + Vector12, (Vector4) + Vector10 = 1;
Vector12 = Vector5 + Vector11, (Vector1) + Vector10 = 1;
Vector13 = Vector5 + Vector14, (Vector2) + Vector15 = 1;
Vector14 = (Vector6) + Vector15, (Vector5) + Vector13 = 1;
Vector15 = Vector6 + Vector14, Vector2 + Vector13 = 1;
Vector16 = Vector1 + Vector17, Vector3 + Vector18 = 1;
Vector17 = (Vector1) + Vector16, (Vector2) + Vector18 = 1;
Vector18 = Vector2 + Vector17, (Vector3) + Vector16 = 1;
// arr is an array containing the shape of the initial tetrahedron for defining the reference system in space.
// 4 conditions(directions) of expansion
Vector1 * 2, Vector4 * 2, Vector3 * 2, Vector10 * 2, Vector7 * 2, Vector16 * 2;
// if not, then
Vector1 * x^(x + 1), Vector4 * x^(x + 1), Vector3 * x^(x + 1), Vector10 * x^(x + 1), Vector7 * x^(x + 1), Vector16 * x^(x + 1);
Vector3 * (2), Vector6 * 2, Vector2 * 2, Vector9 * 2, Vector15 * 2, Vector18 * 2;
// if not, then
Vector3 * (x^(x + 1)), Vector6 * x^(x + 1), Vector2 * x^(x + 1), Vector9 * x^(x + 1), Vector15 * x^(x + 1), Vector18 * x^(x + 1);
Vector1 * (2), Vector2 * (2), Vector5 * 2, Vector12 * 2, Vector13 * 2, Vector17 * 2;
// if not, then
Vector1 * (x^(x + 1)), Vector2 * (x^(x + 1)), Vector5 * x^(x + 1), Vector12 * x^(x + 1), Vector13 * x^(x + 1), Vector17 * x^(x + 1);
Vector4 * (2), Vector5 * (2), Vector6 * (2), Vector8 * 2, Vector11 * 2, Vector14 * 2;
// if not, then
Vector4 * (x^(x + 1)), Vector5 * (x^(x + 1)), Vector6 * (x^(x + 1)), Vector8 * x^(x + 1), Vector11 * x^(x + 1), Vector14 * x^(x + 1);
x = 2^n
n = 1++
int r1 = sum(arr1; sum[arr1; g]);
// the sum of all vectors, in an arbitrary order g, in the range from 1 to 18
arr1 = sum(Vector1; Vector18);
arr2 = sum(Vector18; Vector36);
arrn = sum(18*n; n+18);
// reverse motion system.
// points of position with points in the center and continuous connection of vectors at points of separation are indicated
arr1_=
Line14 = Vector1(0; 0.5); Vector4(0; 0.5);
Line15 = Vector1(0.5; 1); Vector5(0; 0.5);
Line13 = Vector1(0; 0.5); Vector3(0; 0.5);
Line34 = Vector3(0; 0.5); Vector4(0; 0.5);
Line117 = Vector1(0.5; 1); Vector17(0; 0.5);
Line12 = Vector1(0.5; 1); Vector2(0.5; 1);
Line217 = Vector2(0.5; 1); Vector17(0; 0.5);
Line25 = Vector2(0.5; 1); Vector5(0; 0.5);
Line212 = Vector2(0.5; 1); Vector12(0; 0.5);
Line213 = Vector2(0.5; 1); Vector13(0; 0.5);
Line112 = Vector1(0.5; 1); Vector12(0; 0.5);
Line37 = Vector3(0; 0.5); Vector7(0; 0.5);
Line316 = Vector3(0; 0.5); Vector16(0; 0.5);
Line310 = Vector3(0; 0.5); Vector10(0; 0.5);
Line410 = Vector4(0; 0.5); Vector10(0; 0.5);
Line416 = Vector4(0; 0.5); Vector16(0; 0.5);
Line47 = Vector4(0; 0.5); Vector7(0; 0.5);
Line512 = Vector5(0; 0.5); Vector12(0; 0.5);
Line513 = Vector5(0; 0.5); Vector13(0; 0.5);
Line517 = Vector5(0; 0.5); Vector17(0; 0.5);
Line62 = Vector6(0; 0.5); Vector2(0; 0.5);
Line63 = Vector6(0; 0.5); Vector3(0.5; 1);
Line618 = Vector6(0; 0.5); Vector18(0; 0.5);
Line69 = Vector6(0; 0.5); Vector9(0; 0.5);
Line615 = Vector6(0; 0.5); Vector15(0; 0.5);
Line79 = Vector7(0.5; 1); Vector9(0.5; 1);
Line78 = Vector7(0.5; 1); Vector8(0.5; 1);
Line89 = Vector8(0.5; 1); Vector9(0.5; 1);
Line1012 = Vector10(0.5; 1); Vector12(0.5; 1);
Line1011 = Vector10(0.5; 1); Vector11(0.5; 1);
Line1112 = Vector11(0.5; 1); Vector12(0.5; 1);
Line1315 = Vector13(0.5; 1); Vector15(0.5; 1);
Line1314 = Vector13(0.5; 1); Vector14(0.5; 1);
Line1415 = Vector14(0.5; 1); Vector15(0.5; 1);
Line1617 = Vector16(0.5; 1); Vector17(0.5; 1);
Line1618 = Vector16(0.5; 1); Vector18(0.5; 1);
Line1718 = Vector17(0.5; 1); Vector18(0.5; 1);
Line64 = Vector6(0.5; 1); Vector4(0.5; 1);
Line65 = Vector6(0.5; 1); Vector5(0.5; 1);
Line611 = Vector6(0.5; 1); Vector11(0; 0.5);
Line614 = Vector6(0.5; 1); Vector14(0; 0.5);
Line68 = Vector6(0.5; 1); Vector8(0; 0.5);
// Expansion of the reverse system
// Superimposed on top in the same position with additional conditions.
Vector1 * 2, Vector4 * 2, Vector3 * 2, Vector10 * 2, Vector7 * 2, Vector16 * 2;
// if not, then
Vector1 * x ^ (x + 1), Vector4 * x ^ (x + 1), Vector3 * x ^ (x + 1), Vector10 * x ^ (x + 1), Vector7 * x ^ (x + 1), Vector16 * x ^ (x + 1);
Vector3 * (2), Vector6 * 2, Vector2 * 2, Vector9 * 2, Vector15 * 2, Vector18 * 2;
// if not, then
Vector3 * (x ^ (x + 1)), Vector6 * x ^ (x + 1), Vector2 * x ^ (x + 1), Vector9 * x ^ (x + 1), Vector15 * x ^ (x + 1), Vector18 * x ^ (x + 1);
Vector1 * (2), Vector2 * (2), Vector5 * 2, Vector12 * 2, Vector13 * 2, Vector17 * 2;
// if not, then
Vector1 * (x ^ (x + 1)), Vector2 * (x ^ (x + 1)), Vector5 * x ^ (x + 1), Vector12 * x ^ (x + 1), Vector13 * x ^ (x + 1), Vector17 * x ^ (x + 1);
Vector4 * (2), Vector5 * (2), Vector6 * (2), Vector8 * 2, Vector11 * 2, Vector14 * 2;
// if not, then
Vector4 * (x ^ (x + 1)), Vector5 * (x ^ (x + 1)), Vector6 * (x ^ (x + 1)), Vector8 * x ^ (x + 1), Vector11 * x ^ (x + 1), Vector14 * x ^ (x + 1);
x = 2^n
n = 1++
int r1 = sum(arr1_; sum[arr1_; G]);
G = Line g
// the sum of all vectors, in an arbitrary order g, in the range from 1 to 18
arr1_ = sum(0.5*Vector1 + 0.5*VectorN; 0.5*Vector18 + 0.5*VectorN);
0.5*Vector1 + 0.5*VectorN = 1 or 0.5
arr2 = sum(Vector18; Vector36);
arrn = sum(18*n; n+18);
// number of combinations and setting of numerical conditions
K(arr1) = 2^x - 1
// if not, then
K(arr1_) = 2^x - 1
// number of algorithms max movement on a diverging row
:Vectorn
(arr1 - 1)^2 = 17^2 = 289
(arr_1 - 1)^2 = 17^2 = 289
(arrn - 1)^2
(arr_1 - 1)^2
// max movement arr1
:K_maxr
= sum[1; (Vectorn-2)] + (Vectorn + Vectorn+1) or
= sum(Vector1; Vector16) + (Vector n + Vector n+1);
// min movement arr1
:K_minr
= Vectorn + (arrn - Vector g)
// max movement arr1_
: K_maxr
= sum[1; (Line n - 2)] + (Line n + Line n+1)
// min movement arr_1
: K_minr
= Vectorn + (arr_n - Line g)
// the numerical designation must be replaced by vector combinations, from maximum
// to minimum, in the system arr1 and vice versa in the system arr1_
I'm not so good at writing code. Moreover, it is necessary to take into account all the subtleties of the low-level environment. It is difficult to explain, but in short: it is necessary to create an artificial vector environment inside the existing numerical (point) one. When several specific conditions are met, it will be folded into a figurative array, which, in turn, is the composition of the new (geometric) composite units appearing in space. If you are interested, I can explain in more detail. I am not screaming with pride about the prospects and uniqueness, but I am sure that this method of calculation has never been applied before. This is just an idea, and if it seems stupid to you, that's your opinion.





Too much code. Meaningless variable names. An "emulator" that emulates "what"?
You're going to have to be a lot more "interesting" for anyone else to take any interest.
Is this another one where you need an NDA before you hand over the "important points"?
"(I) am amazed to see myself here rather than there ... now rather than then".
― Blaise Pascal





Going to give you a small bio just so you know I have a feel for what (I think) you're trying to do. I'm old, but (to my knowledge) wrote the first flight simulator with shading, including a helmet you wore to display the world depending on where you turned your head. This was 1984. At that time I had to write a fast transcendental calculator (nearly 100x faster than the then-blossoming Math Co-Processors; not as precise but good enough for the app). During that time frame I read a book called "Flight Simulators" by Dr. Bruce Artwick, and a fellow that was shafting us $30-$40 a pop for new fixes to his DOS OS named Bill Gates. [My opinion of Mr. Gates improved drastically when he donated $2 billion to world health.]
My brother had recommended the book, my comment was, "These guys know their math (linear algebra) but can't write code worth sh..". Mr. Gates openly admitted this fact in an interview.
When asked "Just how good is S.... (me)?" in class one day, a 25-year Calculus teacher remarked, "He's certainly the best I've ever had and maybe the best I've ever met."
Now that my back is throbbing from all the self-patting, I'll start the humble path. I'm "out of shape". Math is like distance running. A guy who can break 4 minutes for the mile (I got THAT close but never did) is only as good as his recent workouts. The same is true for math. I am WAY out of shape but hope my intuition is still decent.
When you say "solve more complex problems", I beg to differ. Tensor mathematics, matrices and the like introduce methodology shorthands and NOT the ability to solve unique types of problems (like calculus, differential equations and Laplace Transforms do). Even the latter is solved using the four basics (multiplication, division, etc.) in the end but they introduce the logic required to deduce those basic operations.
Let's take a discussion of matrices for example. Given a set of simultaneous equations you can reduce them to a matrix and it and the rules present easy visualization (for human eyes, not code) and logicless (strictly rule based) solution methods (which makes coding easier).
A clever coder can solve simultaneous equations in code w/o matrices but a garden variety coder can do it with matrices. My impression of what's going on here may be similar. You're trying to introduce a visualization method and set of rules to bypass the complexity of "the other way". What you will not introduce is ability to solve problems that were deemed unsolvable by other methods. I hope I'm wrong and we get to read about your receipt of the Nobel Prize for Mathematics one day.
If you actually want help writing the code then write a function that calculates ONE calculation you want to perform no matter how ugly.
Perhaps someone familiar with tensor math (or something similar) can point you to the latest and greatest technique (like using the GPU to help). BTW, physicists have been coding solutions for as long as there have been PCs (starting with the Manhattan Project) and, because they are the tools of the trade, use math as a second language. After a lifetime of experience, I have come under the impression that nowadays anybody can write code (thanks in no small part to the efforts of the aforementioned Mr. Bill Gates) but only mathematicians can write the really hard stuff. Again, thanks to the efforts of many, 99% (probably more like 99.9999%) of all coding requires little math skill. Yours appears to be the exception.
Good luck. Apologies for the long post... I almost never post on anything, this one caught my eye looking for something else. All the best.





Hi all,
I'm hoping someone can help me out with an algorithm question I got stuck on, while contemplating optimal scheduling algorithms (as one does lol).
Consider a basic task scheduling system, where there's a queue of "Tasks" that need to get done, a pool of "Workers" that can execute work (one at a time), and a scheduler that pulls work from the queue and assigns it to the workers in some fashion.
( [Task1] [Task2] [Task3] ... [TaskT] ) --> [Scheduler] ===> ( [Worker1] [Worker2] ... [WorkerN] )
I've been thinking about various optimal scheduling algorithms for such a system, with various parameters.
For this discussion, let's assume:
- The work is uniform (all Task units are exactly the same).
- "Optimal Scheduling" just means "If I add a bunch of work to the queue, all of it will be completed ASAP."
The Trivial Case
If, in addition to tasks being uniform, all workers are perfectly uniform, it's easy to see that assigning the work to the first available free worker is optimal, and that the scheduler can just be a dumb round-robin allocator of work.
The Heterogeneous Worker Case
Suppose we have an added constraint: Not all workers are the same; they take different amounts of time to perform a work unit. Maybe they run on a heterogeneous fleet, and some are on better hardware....
In that case, if we want optimal scheduling, we can do so greedily, by always assigning to the worker that will become available the soonest given one more unit of work (i.e. "store workers in a priority queue sorted by 'when it will be done with all its work, given one more unit of work'")
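That greedy rule can be sketched with a heap (hypothetical names; Python just for illustration):

```python
import heapq

def schedule_uniform_tasks(speeds, num_tasks):
    """Greedy: always hand the next task to the worker that would finish
    its whole queue soonest if it took one more task.
    speeds[i] = minutes one task takes on worker i."""
    # Heap entries: (finish time if this worker takes one more task, worker).
    heap = [(speed, i) for i, speed in enumerate(speeds)]
    heapq.heapify(heap)
    assignment = []          # which worker gets each task, in order
    makespan = 0
    for _ in range(num_tasks):
        finish, i = heapq.heappop(heap)
        assignment.append(i)
        makespan = max(makespan, finish)
        heapq.heappush(heap, (finish + speeds[i], i))
    return assignment, makespan
```

(This is for the case where every task is released at t = 0; release times break it, as the counterexample further down shows.)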
The Heterogeneous Worker with Scheduled Work Case
Ok, THIS is the one I am stuck on.
Suppose, in addition to the workers being different, we ALSO have a constraint where each unit of work becomes available at a specific time t into the execution.
The aforementioned greedy approach won't work anymore. Consider this example:
Workers:
- W1 takes 8 minutes to finish a work item.
- W2 takes 10 minutes to finish a work item.
Work:
- Task1 is available immediately (t = 0)
- Task2 is available one minute in (t = 1)
With the greedy algorithm, T1 will be assigned to W1, and T2 will be assigned to W2 one minute in, resulting in all work completing after 11 minutes.
However, the optimal solution is, T1 gets assigned to W2, and T2 gets assigned to W1, making T1 the long pole, and allowing the work to complete in only 10 minutes.
Intuitively, it kind of feels like this has to become a search/backtracking problem, perhaps with heavy pruning of nonsensical choices, but... is there a simpler solution I'm missing? I keep getting the sense that there's a viable DP approach, if we're generous with things like declaring all times are integers, and throwing everything at the slowest worker is guaranteed to take less than some reasonably small amount of time...
... or maybe there IS a greedy solution, and I'm just not seeing it...
Any thoughts/direction here?
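One way to sanity-check intuitions on tiny instances is plain exhaustive search over assignments (exponential, so toy inputs only; names are made up):

```python
from itertools import product

def makespan(assignment, releases, speeds):
    """Finish time when task k (tasks listed in release order) goes to
    worker assignment[k]; each worker runs its tasks sequentially."""
    free = [0] * len(speeds)                  # when each worker next goes idle
    for task, w in enumerate(assignment):
        start = max(free[w], releases[task])  # wait for worker AND release time
        free[w] = start + speeds[w]
    return max(free)

def optimal_makespan(releases, speeds):
    """Try every possible assignment of tasks to workers."""
    workers = range(len(speeds))
    return min(makespan(a, releases, speeds)
               for a in product(workers, repeat=len(releases)))
```

On the example above, assignment (W1, W2) gives 11 while (W2, W1) gives 10, matching the counterexample.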






Thanks for your response!
Sorry if I wasn't clear; in that last example, we do know exactly how long a specific worker would take to finish any task (the tasks are all the same, and it's just based on the efficiency of the worker). It's a totally contrived example.





Trying to (automatically) schedule based on an "individual's performance levels" is "insane".
"Capacity" (planning) is based on history for a particular "type of resource": e.g. Senior Analyst "hours"; etc.
If a "resource" is not performing, you "demote" them and assign them to a different "resource group", instead of trying to factor in "individual" performance factors.
One estimates based on x hours of resource (type) A and resource type B, etc.
Not 10 hours of "Sally" OR 12 hours of "Bob", if "Sally is not available".
It's 10 hours of "Junior Programmer" time (or whatever), so if "Bob" leaves, you're not looking for another Bob.
"(I) am amazed to see myself here rather than there ... now rather than then".
― Blaise Pascal





I'm creating a chess engine and I've tried to implement futility pruning. Results are sometimes worse than without it. Could somebody tell me what I'm doing wrong? My margin is set to 601 (pawn is set to 200, knight/bishop to 600, for example). Thanks
Same position
without futility on depth 5
Score: 90
Nodes: 89383
Depth: 5
Time: 2011ms
with futility on depth 5
Score: 90
Nodes: 85808
Depth: 5
Time: 1867ms
without futility on depth 6
Score: 16
Nodes: 836935
Depth: 6
Time: 12360ms
with futility on depth 6
Score: 65
Nodes: 855792
Depth: 6
Time: 12544ms
<pre>private int negamax(Position node, int depth, int alpha, int beta, int color, byte castling) {
    if (depth == 0) {
        return quiescenceSearch(node, alpha, beta, color);
    }
    if (depth == 1) {
        // Futility pruning: at frontier nodes, skip the search entirely if
        // even a margin-sized swing cannot lift the static score above alpha.
        int staticEval = color * countScore(node.pieces, depth, node.prevI, node.i);
        if (staticEval + margin < alpha) {
            return staticEval;
        }
    }
    ArrayList<Move> possibleMoves = generateMove(node.pieces, node.prevI, node.i, getMoveByColor(color), castling);
    if (possibleMoves.size() == 0) {
        return color * countScore(node.pieces, depth, node.prevI, node.i);
    }
    orderMoves(possibleMoves, node.pieces);
    int value = MIN;
    for (Move possibleMove : possibleMoves) {
        // Make the move, remembering enough state to undo it afterwards.
        Piece tempPiece = node.pieces[possibleMove.getI()];
        int prevPiece = node.pieces[possibleMove.getPrevI()].getType();
        node.pieces[possibleMove.getPrevI()].move(possibleMove.getI(), possibleMove.getPrevI(), node.pieces, 0);
        byte tempCastling = setCastling(node.prevI, node.i, castling);
        Move temp = new Move(node.i, node.prevI);
        node.prevI = possibleMove.getPrevI();
        node.i = possibleMove.getI();
        // Negamax recursion: negate the score and swap/negate the window.
        int tempValue = -negamax(node, depth - 1, -beta, -alpha, -color, tempCastling);
        value = Math.max(value, tempValue);
        undo(possibleMove.getI(), possibleMove.getPrevI(), node.pieces, tempPiece, prevPiece);
        node.prevI = temp.getPrevI();
        node.i = temp.getI();
        alpha = Math.max(alpha, value);
        if (depth == searchDepth) {
            if (value > bestMoveValue) {
                bestMoveValue = value;
                bestMove = new Move(possibleMove.getI(), possibleMove.getPrevI());
            }
        }
    }
    return value;
}</pre>





My teacher seriously glossed over finding values of c and n0 for big O and big Omega.
For big Omega he said to add the coefficients and, if there's a negative, flip the value.
Now I have a question asking me to prove 7n-2 is Ω(n).
By his explanation, c = 1/9.
Which would give me (1/9)n <= 7n-2.
Which seems unnecessary, since c and n0 could both be 1 and it would still hold that cn <= 7n-2.
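A quick finite-range spot check of both constant choices (just arithmetic, not a proof):

```python
# Omega(n) witness check for f(n) = 7n - 2: need c*n <= 7n - 2 for all n >= n0.
def holds(c, n0, upto=10_000):
    return all(c * n <= 7 * n - 2 for n in range(n0, upto))
```

Both c = 1/9 and c = 1 work with n0 = 1, since cn <= 7n - 2 with c = 1 is just 6n >= 2, true for every n >= 1; the 1/9 from the flip-the-coefficients rule is simply a safe, overly conservative choice.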
Any thoughts, comments, helpful suggestions?





I am finishing up a course and it got me thinking about algo design. There were 21 people in our group for a 4 week course. Each week, we were split into 5 separate groups. Ideally, you would never have anyone in the same group with anyone that they were in a group with previously. In actuality, this was poorly executed. For instance, I was in a group with Sally from London in 3 of the 4 weeks.
I was thinking of creating an algo for them for next time so they don't have the problem. When I got to thinking about it, I could only think to do it by brute force: essentially, loop through each person until groups were made with minimal repetition. This would work when dealing with a group of 21, needing 5 subgroups of 4 or 5, 4 times. However, it would become unmanageable if the group was 100 times the size.
So the question is: starting with a group of size X, how do you divide it into Y groups, Z times, with the goal of minimizing overlap?
I imagine this is a simple problem using high school level combinatorics, but I can't seem to get started.
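For what it's worth, a greedy heuristic (hypothetical names; no optimality claim) avoids full brute force: each week, put each person into the group where they meet the fewest people they've already been grouped with.

```python
import random

def schedule(people, num_groups, weeks, seed=0):
    """Greedy: place each person where they meet the fewest old group-mates."""
    rng = random.Random(seed)
    met = set()                              # pairs who have shared a group
    cap = -(-len(people) // num_groups)      # ceil(len(people) / num_groups)
    plan = []
    for _ in range(weeks):
        groups = [[] for _ in range(num_groups)]
        order = people[:]
        rng.shuffle(order)                   # vary tie-breaking week to week
        for p in order:
            # Fewest repeat pairings first; break ties by smallest group.
            best = min((g for g in groups if len(g) < cap),
                       key=lambda g: (sum(frozenset((p, q)) in met for q in g),
                                      len(g)))
            best.append(p)
        for g in groups:
            met.update(frozenset((p, q)) for p in g for q in g if p != q)
        plan.append(groups)
    return plan

def repeats(plan):
    """How many pairings occur more than once across the whole plan."""
    seen, count = set(), 0
    for week in plan:
        for g in week:
            for i, p in enumerate(g):
                for q in g[i + 1:]:
                    pair = frozenset((p, q))
                    count += pair in seen
                    seen.add(pair)
    return count
```

This runs in polynomial time, so it scales to groups 100 times the size; it just won't guarantee the minimum possible overlap.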





This kind of problem is treated in the mathematical discipline called Block design - Wikipedia[^].
Unfortunately it is more than 30 years since I had some lectures on this topic at university, so I can't remember any actual algorithm. But maybe you can start searching for algorithms in this category.
I dimly remember that there are algorithms for finding specific designs that are better than brute force, by recombining solutions from a smaller-scale problem into solutions for a larger problem. But I'm not sure that would work for this problem.
Good luck!
P.S.: assuming you're not a student of mathematics, I suggest you try the idea I described, i. e. scaling up the problem incrementally. E. g. you solve the problem for a group of size Y (trivial), then you increase the group size one by one and solve the problem for that size, based on the previous results. Then you keep increasing the group and finding the best solutions for the new size.
To get a new best distribution at any step, use the best distribution(s) from the previous step, and test all valid options for adding the new group member. Then decide which one(s) cause the least overlap and proceed.
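That incremental idea might look roughly like this (a heuristic sketch with made-up names, not a guaranteed-optimal method): for each new member, try every per-week group choice and keep the cheapest.

```python
from itertools import product

def place_new_member(plan, p, cap):
    """plan[w][g] lists the people in group g during week w.  Try every way
    of giving p one group per week; keep the placement creating the fewest
    repeated pairings that involve p."""
    best, best_cost = None, None
    for choice in product(*(range(len(week)) for week in plan)):
        if any(len(plan[w][g]) >= cap for w, g in enumerate(choice)):
            continue                          # a chosen group is already full
        partners = [q for w, g in enumerate(choice) for q in plan[w][g]]
        cost = len(partners) - len(set(partners))   # people p meets twice+
        if best is None or cost < best_cost:
            best, best_cost = choice, cost
    for w, g in enumerate(best):
        plan[w][g].append(p)
    return best_cost

def build(total, num_groups, weeks):
    """Grow the schedule one person at a time."""
    cap = -(-total // num_groups)             # ceil(total / num_groups)
    plan = [[[] for _ in range(num_groups)] for _ in range(weeks)]
    total_repeats = sum(place_new_member(plan, p, cap) for p in range(total))
    return plan, total_repeats
```

Each new person costs num_groups^weeks placement trials (5^4 = 625 for the 21/5/4 case), which stays polynomial in the group size, unlike full brute force over all subdivisions.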
GOTOs are a bit like wire coat hangers: they tend to breed in the darkness, such that where there once were few, eventually there are many, and the program's architecture collapses beneath them. (Fran Poretto)
modified 8-Aug-18 7:59am.





Your end goal is a "list" of "groups"; therefore, your only recourse is "brute force".
What an "algo" gives you is "the number of combinations for x taken n at a time"; i.e. a "number".
"(I) am amazed to see myself here rather than there ... now rather than then".
― Blaise Pascal





Gerry Schmitz wrote: Your end goal is a "list" of "groups"; therefore, your only recourse is "brute force".
I beg to disagree. See my answer above. The mathematical discipline of block designs deals with such problems and algorithms to approach them. The discipline wouldn't exist if it weren't possible to solve these problems without resorting to brute force.
GOTOs are a bit like wire coat hangers: they tend to breed in the darkness, such that where there once were few, eventually there are many, and the program's architecture collapses beneath them. (Fran Poretto)





I consider anything that has to be "iterated" as "brute force".
Sorry, but your "answer" was a non-answer: Quote: ... can't remember any actual algorithm ... I'm not sure that would work for this problem ...
I "can" come up with a "routine" to produce the list in question.
"(I) am amazed to see myself here rather than there ... now rather than then".
― Blaise Pascal





Gerry Schmitz wrote: I consider anything that has to be "iterated" as "brute force".
Then we have a different understanding of brute force. In my book, brute force means testing all possible solutions. The number of possible group subdivisions for a group of size N is exponential in N. The effort of solving a combinatorial problem by brute force is (believed to be - see link below) exponential, whereas the algorithm I described above is polynomial.
If you believe that is the same, then that is equivalent to saying P=NP[^].
GOTOs are a bit like wire coat hangers: they tend to breed in the darkness, such that where there once were few, eventually there are many, and the program's architecture collapses beneath them. (Fran Poretto)





You are right that this is a way too crude way to look at this from a mathematical view. Integration is a good example: brute-force summation can be done, but does not always give the correct answer, so Runge-Kutta methods are (usually) used instead on non-stiff problems.





If you actually looked at the problem requirements, you would see that the "state" at stages 0 and 1 is clearly defined, and very limited (no "joins"; then 4,4,4,4,5 in any combination).
And since the previous "state" is recursively fed to the next stage in order to set the context for that stage, it should be obvious that the number of "unique" combos rapidly dwindles to 0; and you are simply now trying to "minimize" the objective function ("discord") for the remaining "time-limited" horizon (i.e. "classes").
Classic dynamic programming recursion.
"(I) am amazed to see myself here rather than there ... now rather than then".
― Blaise Pascal





What you have is a "scheduling problem".
It is an example of "dynamic programming".
Dynamic programming does not supply a "one size fits all"; it generally has to be tailored to the "problem".
And in general, dynamic programming is reserved for those situations where "linear" methods are not suitable.
This is one of those cases; like quantum computing, "exclusivity" was recently upended by an 18-year-old who showed "classical computing" is still in competition with quantum computing.
Op wanted a "solution", not just theory.
"(I) am amazed to see myself here rather than there ... now rather than then".
― Blaise Pascal





While I don't think there is any 'scheduling' involved (no time constraints; the order of group subdivisions is irrelevant), I agree that it is a problem that needs a tailored solution. I suggested a recursion over total group size, you suggested a recursion over the required number of group distributions. I think both concepts are equally suitable, and both will need to be tuned to the problem to turn them into workable solutions.
I see you are using terms from computer science, whereas I am using mathematical concepts. That doesn't mean the ideas they represent are so different, or incompatible. It's more like one of us is discussing the optimal temperature whereas the other insists on a specific color - we're not disagreeing, as far as I can see.
GOTOs are a bit like wire coat hangers: they tend to breed in the darkness, such that where there once were few, eventually there are many, and the program's architecture collapses beneath them. (Fran Poretto)





Member 13939652 wrote: but I can't seem to get started.
Start by solving a sub-problem by hand with a sheet of paper and a few colored pencils.
First draw a table of people (21x21), then build a solution as long as there is no overlap.
Use a different color for each turn (week).
In the table, fill the cells where a pair of people are in the same group the same week. Go as far as you can without overlap.
The way you did it is the basis of your algorithm. Then, when you have to deal with overlap, see how to minimize the number of people that were in the same group the same week.
In a group of 5, each member is paired with 4 other people.
Patrice
“Everything should be made as simple as possible, but no simpler.” Albert Einstein
modified 1-Sep-18 23:16pm.





I'm aware this is a lazy question.
All my hopes are in you: @Stefan63
Is there a guide on how to prevent a "Simplex Downhill" search from ending in a local minimum?
Sorry, and thanks for any hint.
It does not solve my problem, but it answers my question.
modified 19-Jan-21 21:04pm.





Hello.
In general, there is no way to prevent ending up in a local minimum.
However, in some special cases it may be possible. E. g. if you want to find the minima of a polynomial function, you can instead search for the roots of its derivative, and use Sturm's theorem - Wikipedia[^] to enumerate and localize every single root.
Otherwise, there are some methods that can increase the chance of homing in on the global minimum. You can use Simulated annealing - Wikipedia[^] to prevent getting stuck in a local minimum: basically, this algorithm shakes up the current best solution to give it a chance of getting out of a minimum. The chance of doing so is smaller for better solutions, therefore the hope is to end up in the best one - the global minimum. Also, the chance of switching to another (local) minimum is decreased over time to ensure that the algorithm comes to an end.
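For illustration, a minimal simulated-annealing sketch on a one-dimensional double-well function (all names and parameter values here are arbitrary choices, not a reference implementation):

```python
import math
import random

def simulated_annealing(f, x0, step=1.0, t0=5.0, cooling=0.999,
                        iters=20000, seed=1):
    """Minimize f(x).  Uphill moves are accepted with probability
    exp(-increase / t); as the temperature t cools, the walk settles,
    having had a chance to escape shallow local minima early on."""
    rng = random.Random(seed)
    x = x0
    fx = f(x)
    best, fbest = x, fx
    t = t0
    for _ in range(iters):
        y = x + rng.gauss(0.0, step)          # random neighbour
        fy = f(y)
        if fy < fx or rng.random() < math.exp((fx - fy) / t):
            x, fx = y, fy                     # accept (possibly uphill) move
            if fx < fbest:
                best, fbest = x, fx           # remember best point ever seen
        t = max(t * cooling, 1e-9)
    return best, fbest

def double_well(x):
    """Shallow local minimum near x = 1, global minimum near x = -1."""
    return (x * x - 1.0) ** 2 + 0.3 * x
```

Started at the shallow minimum x0 = 1, the early high-temperature phase lets the walk hop the barrier at x = 0 and settle near the deeper well.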
Or you could use a Genetic algorithm - Wikipedia[^] to find a solution that is pretty much guaranteed to be at least close to the global minimum. Basically, this is a guided trial-and-error search algorithm which can be programmed to always keep the current best solution(s). This algorithm is more suitable for discrete or combinatorial problems, but it can also be applied to continuous problems.
If you can tell me more about the specific kind of problem, I may be able to offer better advice.
P.S.: I didn't realize that "Simplex Downhill" may refer to a specific algorithm. The term 'Downhill' by itself is often used in conjunction with many minimization algorithms. But did you mean this[^]? If so, the above suggestions are of course of little help, unless you are looking for ways to modify that algorithm using ideas from other minimization techniques. E. g. the basic concept of Simulated Annealing is to allow steps of arbitrary size, with a low likelihood for very large steps. You could use that idea by allowing new points to be constructed relatively far away, potentially nearer to another local minimum.
Unfortunately I have no experience with the Nelder-Mead method myself.
GOTOs are a bit like wire coat hangers: they tend to breed in the darkness, such that where there once were few, eventually there are many, and the program's architecture collapses beneath them. (Fran Poretto)
modified 6-Aug-18 9:39am.




