
What is being done in the following algorithm?
dmin ← ∞
for i ← 0 to n − 1 do
    for j ← 0 to n − 1 do
        if i ≠ j and A[i] − A[j] < dmin then
            dmin ← A[i] − A[j]
return dmin
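For reference, here is a direct Python transcription of the pseudocode above (the function name is mine, not part of the exercise), assuming A is a list of numbers with n = len(A):

```python
import math

def min_pairwise_difference(A):
    """Direct transcription of the pseudocode: scan every ordered
    pair (i, j) with i != j and keep the smallest A[i] - A[j]."""
    n = len(A)
    dmin = math.inf
    for i in range(n):
        for j in range(n):
            if i != j and A[i] - A[j] < dmin:
                dmin = A[i] - A[j]
    return dmin

# Since there is no absolute value, the smallest ordered difference
# is min(A) - max(A), i.e. the negative of the array's range.
print(min_pairwise_difference([3, 8, 1, 6]))  # -> -7
```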





This smells of a homework assignment or a take-home test.
Walk through the code by hand, step by step, and start writing values down. You might find it pretty easy to figure out what it's doing.





It is not doing anything, since n and A are not defined.





Hey there.
First, it's not my homework; I've finished my semester and I'm studying for my finals.
I find the Algorithms course very difficult, and I would like to go over some exercises from past finals.
Here is one:
Given an array of jobs with different time requirements and K identical assignees, suggest an algorithm that, given K identical assignees, the time requirements of the n jobs (t1, ..., tn), and a time T1, answers "Yes" if it is possible to split the jobs among the assignees such that the maximum time any assignee needs to accomplish his jobs is at most T1, and "No" otherwise.
The constraints are:
1. An assignee can be assigned only contiguous jobs. For example, an assignee cannot be assigned jobs 1 and 3, but not 2.
2. Two assignees cannot share (or be co-assigned) a job, i.e., a job cannot be assigned partially to one assignee and partially to another.
I thought of the greedy way: going over the job times and summing them up for each assignee as far as possible. Also, I don't know if some sorting can help make it more efficient.
Dynamic programming way: I can't even start thinking about a solution using DP. We are told to look at subproblems of our problem, but I don't even see a suitable subproblem. Can someone direct me, please? How do you start thinking about a solution using DP?
Thanks in advance.





From the dark days of my project management life I think this is what we used to call CPA: Critical Path Analysis. So you need to build a picture of each stream where jobs must run in sequence, and how long they take. You can then figure out how many can actually run, given the number of assignees available. I have only ever done this manually by drawing a diagram on a blackboard, but I know there are plenty of packages around to help (even Microsoft Project). So start with some paper diagrams to get your thoughts clear on how to approach it in terms of logical steps.
BTW, the statement "an assignee cannot be assigned jobs 1 and 3, but not 2." does not make sense, as both parts are negative.





Actually, that doesn't help me reach anything interesting...
I look at it this way: iterate over the jobs, summing their time requirements up, and each time I add another one I check whether an assignee is able to... Actually, I don't even know what my condition has to be at all.
Frustrating...





Sorry, that is the best I can offer, it's been more than 30 years.





I've never worked on this problem. Coming up with a solution is one thing, but coming up with the optimal solution is quite another. It seems to resemble the bin packing problem, which is NP-complete, but it has the additional constraint of having to assign contiguous jobs.





Hey, I've edited the problem.
It seems to be different now from the bin-packing one.





You need to provide an accurate description of the problem the first time so that people don't waste their time.





Checking whether the time limit T1 can be met is easier than optimizing from scratch. You don't need DP, and you don't need sorting; your greedy solution already does the job. It takes only linear time, and there is nothing better you could do in that sense: the whole input needs to be read anyway, so any algorithm takes at least linear time.
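The greedy check described above can be sketched in Python; a minimal sketch, assuming the job times come as a list of positive integers (the function and parameter names are my own, not from the exercise):

```python
def can_finish_within(times, k, t_limit):
    """Greedy feasibility check: walk the jobs in order, packing each
    assignee with contiguous jobs as long as the running sum stays
    within t_limit; start a new assignee otherwise.
    Returns True iff k assignees suffice. Runs in O(n)."""
    assignees_used = 1
    current = 0
    for t in times:
        if t > t_limit:          # a single job already exceeds the limit
            return False
        if current + t <= t_limit:
            current += t
        else:                    # start the next assignee on this job
            assignees_used += 1
            current = t
            if assignees_used > k:
                return False
    return True

print(can_finish_within([4, 2, 7, 3], k=2, t_limit=10))  # -> True
print(can_finish_within([4, 2, 7, 3], k=2, t_limit=7))   # -> False
```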





Okay.
How can I know at all if my algorithm is "good enough" and "does the job"?
Another way of asking my question: how does one think of the correctness proof? The optimality and validity?
Thank you!
The next section of the problem asks us for the following:
Use the first algorithm (...at most T1...) to describe a polynomial algorithm calculating the optimal time T to accomplish all the jobs.
What's the approach?
**Greg Utas**: You're right, it won't happen again.





Search for the lowest T1 such that the previous algorithm returns True.
For example, that can be done in O(log S) steps, where S is the sum of all times. Doing it in O(S) would not be polynomial time, because S can be exponential in the size of the input (imagine the times being very few but very large integers), but log S is fine.





Hmm, so you actually direct me to some approach where I only have to divide the time by 2 each time.
The logic behind that, I think, is:
If at first we see that we can split our jobs (total time = S) such that T1 = S (that is, K is at least 1), then let's find out if we can split them equally between 2 assignees: now we pass 0.5*S to our first algorithm as the parameter.
So we divide by 2 and pass it to the first algorithm, and so on...
Am I right?
BTW, how do I think about proving the optimality of such an algorithm?
Thank you!





You can't choose K, right?
A useful property of Algorithm1 is that:
- if Algorithm1(T1) is false, then Algorithm1(T1 − k) is also false (for any k ≥ 0)
- if Algorithm1(T1) is true, then Algorithm1(T1 + k) is also true (for any k ≥ 0)
Or in other words, there is a threshold below which it returns false (not enough time) and above which it returns true (enough time).
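That threshold structure is exactly what binary search exploits. As a generic sketch (assuming integer candidate values; the helper name and the threshold of 42 are hypothetical, for illustration only):

```python
def first_true(lo, hi, pred):
    """Given a predicate that is False below some threshold and True
    from there on (the property above), find the smallest value in
    [lo, hi] where pred is True, probing only O(log(hi - lo)) points."""
    while lo < hi:
        mid = (lo + hi) // 2
        if pred(mid):
            hi = mid        # mid works, so the threshold is at most mid
        else:
            lo = mid + 1    # mid fails, so the threshold is above mid
    return lo

# Hypothetical predicate whose threshold is 42.
print(first_true(0, 100, lambda x: x >= 42))  # -> 42
```

In the job problem, pred would be Algorithm1 and the answer is the smallest T1 for which it returns True.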





I obviously agree that this property indeed holds, but... I can't see how I should use it in some manner of dividing the total time by some constant each time (to get log(S)).
Maybe you can give me another clue? Some approach?
Thank you!





If T1 was chosen in the middle of a range, then whichever result Algorithm1(T1) gives, half of the range will be uninteresting (either all False or all True).





@harold aptroot 
Sorry, but I'm not sure I got your point.
I agree, if you mean the following: in case we've divided S by 2 and Algorithm1(T1) == TRUE, then we know for sure that T_Optimal <= T1.
So the algorithm should go as I described before, no? With the division by 2 each time. But that's probably not sufficient, as we may skip some T1 which gives FALSE, and from there we don't know how many time units to add back up...





When the result for T1 is False, that also cuts off half of the current range: the first half.





Genius. It reminds me a little bit of the Partition procedure in QuickSort.
So we start with Algorithm1(S); if it returns TRUE we call Algorithm1(0.5 * S), and so on...
If at some point (after i steps) we call Algorithm1(0.5^i * S) and it returns FALSE, we call Algorithm1(0.5 * (0.5^i * S + 0.5^(i−1) * S)), and so on...
Hmm, actually now I'm not sure about my suggestion.
Should I actually initialize an array of size S with all the natural numbers from 1 to S?
Thank you!





Member 14873814 wrote: It reminds me a little bit the procedure of Partition in QuickSort.
It has a bit of that flavour. What it reminds me of is Binary Search (well, that's what it is, but a specific variant).
Member 14873814 wrote: Should I actually initialize an array of size S with all the natural numbers from 1 to S?
No need, you can pass the index itself straight into Algorithm1.





What index exactly, if there is no array?





If it was a "by the book" Binary Search, there would be an index into the array of data. In this case the data does not explicitly exist; the array is replaced by a function (Algorithm1).





But I'm not sure how I should treat the data in this case.
Let's suppose our sum of job times is S = 100.
We have k = 3 assignees.
We start by calling Algorithm1(100), which is TRUE of course.
Now we call Algorithm1(50) == TRUE.
Now Algorithm1(25) == FALSE.
So in this case we want to check the middle of [25, 50], which we can get by calculating 0.5*(25 + 25*2), so now we call Algorithm1(38), which returns TRUE.
But what now? If we divide 38 by 2 again, we go out of the range [25, 50].
See the problem?





If the range was [25, 50] and 38 was picked as the midpoint and Algorithm1(38) is True, then the next range is [25, 38]. No problem: just update the endpoint of the range. Taking the average of 25 and 38 ("divide by 2" is only what happens when the start point is zero) also does not cause a problem. Taking an average never "escapes" from the range.
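Putting the whole thread together, the search can be sketched as follows; a minimal Python sketch, assuming integer job times (all function names are mine, not from the exercise). The answer must lie in [max(times), sum(times)]: no assignee can avoid the single largest job, and one assignee taking everything always works.

```python
def can_finish_within(times, k, t_limit):
    """Greedy O(n) feasibility check discussed earlier in the thread:
    pack contiguous jobs onto each assignee while the sum fits."""
    used, current = 1, 0
    for t in times:
        if t > t_limit:
            return False
        if current + t <= t_limit:
            current += t
        else:
            used, current = used + 1, t
            if used > k:
                return False
    return True

def minimal_time(times, k):
    """Binary search on [max(times), sum(times)]: the check is monotone
    in t_limit, so each probe halves the range. O(n log S) overall."""
    lo, hi = max(times), sum(times)
    while lo < hi:
        mid = (lo + hi) // 2
        if can_finish_within(times, k, mid):
            hi = mid          # feasible: the optimum is at most mid
        else:
            lo = mid + 1      # infeasible: the optimum is above mid
    return lo

# Contiguous splits of [4, 2, 7, 3] into 2 parts: the best is
# [4, 2] | [7, 3] with maximum load 10.
print(minimal_time([4, 2, 7, 3], k=2))  # -> 10
```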



