First, it's not my homework, I've done my semester and I'm studying for my finals.
I find the course of Algorithms very difficult for me, and I would like to go over some exercises from past finals.
Here is one:
Given an array of jobs with different time requirements and K identical assignees. Suggest an algorithm that, given K identical assignees, the time requirements of n jobs (t1, ..., tn), and a time T1, answers "Yes" if it is possible to split the jobs among the assignees such that the maximum time any single assignee needs to finish his jobs is at most T1. Otherwise, it answers "No".
The following constraints apply:
1. An assignee can be assigned only contiguous jobs. For example, an assignee cannot be assigned jobs 1 and 3 but not job 2.
2. Two assignees cannot share (or be co-assigned) a job, i.e., a job cannot be assigned partially to one assignee and partially to another.
I thought of the greedy way: go over the job times and sum them up for each assignee for as long as possible. Also, I don't know whether some sorting could help to make it more efficient.
As for the dynamic programming way - I can't even start thinking about a solution using DP. We are told to look for sub-problems of our problem, etc., but I don't even see a suitable sub-problem. Can someone direct me, please? How do you start thinking about a solution using DP?
From the dark days of my project management life I think this is what we used to call CPA: Critical Path Analysis. So you need to build a picture of each stream where jobs must run in sequence, and how long they take. You can then figure out how many can actually run, given the number of assignees available. I have only ever done this manually by drawing a diagram on a blackboard, but I know there are plenty of packages around to help (even Microsoft Project). So start with some paper diagrams to get your thoughts clear on how to approach it in terms of logical steps.
BTW, the statement "an assignee cannot be assigned jobs 1 and 3, but not 2." does not make sense, as both parts are negative.
Actually, that doesn't help me reach anything interesting...
I look at it this way: iterate over the jobs, summing up their time requirements, and each time I add another one, I check whether an assignee is able to... Actually, I don't even know what my condition should be.
I've never worked on this problem. Coming up with a solution is one thing, but coming up with the optimal solution is quite another. It seems to resemble the bin packing problem, which is NP-complete, but it has the additional constraint of having to assign contiguous jobs.
Checking whether the time limit T1 can be met is easier than optimizing from scratch. You don't need DP, you don't need sorting; your greedy solution already does the job. It only takes linear time, and there is nothing asymptotically better you could do: the entire input has to be read anyway, so any algorithm takes at least linear time.
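Here is a minimal sketch of that linear greedy check (the function name `can_finish_within` is my own; the thread later calls this routine "Algorithm1"):

```python
def can_finish_within(times, k, t1):
    """Greedy feasibility check: walk the jobs in order, packing the
    current assignee with contiguous jobs until adding the next job
    would exceed t1, then move on to a fresh assignee.
    Returns True iff k assignees suffice. Runs in O(n)."""
    assignees_used = 1
    current_load = 0
    for t in times:
        if t > t1:                      # a single job already exceeds the limit
            return False
        if current_load + t <= t1:
            current_load += t           # same assignee keeps going
        else:
            assignees_used += 1         # hand the next jobs to a new assignee
            current_load = t
            if assignees_used > k:
                return False
    return True
```

For example, `can_finish_within([10, 7, 8, 12, 6, 8], 3, 20)` returns True (splits as 10+7 | 8+12 | 6+8), while the same jobs with only 2 assignees and the same limit give False.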
Search for the lowest T1 such that the previous algorithm returns True.
For example, that can be done with binary search in O(log S) feasibility checks, where S is the sum of all times. Doing it with O(S) checks would not be polynomial time, because S can be exponential in the size of the input (imagine the times being very few but very large integers), but O(log S) is fine.
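One way this binary search could look (a sketch; the function names are my own, and the inner `feasible` is the linear greedy check described earlier in the thread):

```python
def min_max_time(times, k):
    """Binary search for the smallest T1 for which the greedy
    feasibility check answers Yes. O(n log S) overall, where S is
    the sum of all times."""
    def feasible(limit):
        # linear greedy check: pack contiguous jobs per assignee
        used, load = 1, 0
        for t in times:
            if t > limit:
                return False
            if load + t <= limit:
                load += t
            else:
                used, load = used + 1, t
        return used <= k

    lo, hi = max(times), sum(times)    # the answer must lie in this range
    while lo < hi:
        mid = (lo + hi) // 2
        if feasible(mid):
            hi = mid                   # mid works, try something smaller
        else:
            lo = mid + 1               # mid fails, more time is needed
    return lo
```

The search works because feasibility is monotone in the limit: if some T1 is enough, any larger T1 is enough too.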
Hmm, so you're actually directing me to an approach where I only have to divide by 2 each time.
The logic behind that, I think, is:
If at first we see that we can split our jobs (total time = S) such that T1 = S (that is, K is at least 1), then let's find out whether we can split them equally between 2 assignees - now we pass 0.5*S as the parameter to our first algorithm.
So we divide by 2, pass it to the first algorithm, and so on...
Am I right?
BTW, how do I go about proving the optimality of such an algorithm?
@harold aptroot -
Sorry, but I'm not sure I got your point.
I agree if you mean the following: In case we've divided S by 2, and Algorithm1(T1) == TRUE, then we know for sure that T_Optimal <= T1.
So the algorithm should go as I described before, no? Dividing by 2 each time. But that's probably not sufficient, as we may skip over some T1 that gives FALSE, and from there we don't know how many time units to add back...
Genius. It reminds me a little of the Partition procedure in QuickSort.
So we start with Algorithm1(S); if it returns TRUE, we call Algorithm1(0.5 * S), and so on...
If at some point (after i steps) we call Algorithm1(0.5^i * S) and it returns FALSE, we call Algorithm1(0.5 * (0.5^i * S + 0.5^(i-1) * S)), and so on...
Hmmm actually now I'm not sure about my suggestion.
Should I actually initialize an array of size S with all the natural numbers from 1 to S?
If the range was [25, 50] and 38 was picked as the midpoint and Algorithm1(38) is True, then the next range is [25, 38]. No problems, just updating the endpoint of the range. Taking the average of 25 and 38 ("divide by 2" is only what happens when the startpoint is zero) also does not cause a problem. Taking an average never "escapes" from the range.
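The range-narrowing step above can be written down generically (a sketch; `lowest_true` and `is_feasible` are my own names, and the predicate is assumed monotone: FALSE below some threshold, TRUE from there on, as with Algorithm1):

```python
def lowest_true(is_feasible, lo, hi):
    """Narrow [lo, hi] down to the smallest value where is_feasible
    flips to True. Taking the average of the endpoints ("divide by 2"
    is only what happens when the startpoint is zero) never escapes
    the current range."""
    while lo < hi:
        mid = (lo + hi) // 2
        if is_feasible(mid):
            hi = mid          # e.g. range [25, 50], feasible mid 38 -> [25, 38]
        else:
            lo = mid + 1      # infeasible mid: answer is strictly above it
    return lo
```

For instance, `lowest_true(lambda x: x >= 38, 25, 50)` narrows the range down to 38.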
For some reason I can't explain, I don't see your last comment on this topic, where you mentioned the idea of saving the start-point and end-point. (I do see it through the notifications on the site and in my mail.)
Amazing, simple and genius.
So we've actually arrived at a polynomial-time greedy algorithm.
BTW, how do you approach proving the correctness, and specifically the optimality, of this algorithm?
Thank you so much!!