
Actually, that doesn't help me reach anything interesting...
I look at it this way: iterate over the jobs, summing their time requirements up, and each time I add another one I check whether an assignee is able to... Actually, I don't even know what my condition has to be.
Frustrating...





Sorry, that is the best I can offer, it's been more than 30 years.





I've never worked on this problem. Coming up with a solution is one thing, but coming up with the optimal solution is quite another. It seems to resemble the bin packing problem, which is NP-complete, but it has the additional constraint of having to assign contiguous jobs.





Hey, I've edited the problem.
It seems to be different now from the bin packing one.





You need to provide an accurate description of the problem the first time so that people don't waste their time.





Checking whether the time limit T1 can be met is easier than optimizing from scratch. You don't need DP, you don't need sorting; your greedy solution already does the job. It takes only linear time, and there is nothing better you could do in that sense: the whole input needs to be seen anyway, so any algorithm will take at least linear time.
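For reference, the greedy check being discussed can be sketched like this (a minimal Python sketch; the function and parameter names are my own, assuming the jobs are given as a list of times and must stay contiguous and in order):

```python
def can_finish_within(times, k, t1):
    """Greedy feasibility check: can the jobs, kept contiguous and in
    order, be split among at most k assignees so that no assignee
    works more than t1 time units?"""
    assignees = 1  # the first assignee starts working immediately
    load = 0       # time already assigned to the current assignee
    for t in times:
        if t > t1:
            return False  # a single job already exceeds the limit
        if load + t > t1:
            # current assignee is full: hand the rest to a new one
            assignees += 1
            load = 0
        load += t
    return assignees <= k
```

The greedy argument is the usual one: filling each assignee as far as possible before starting the next can never increase the number of assignees needed, so if this check fails, no contiguous split within T1 exists.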





Okay.
How can I know at all whether my algorithm is "good enough" and "does the job"?
Another way of asking my question: how does one come up with the correctness proof? Both the optimality and the validity?
Thank you!
The next section of the problem asks us for the following:
Use the first algorithm (...at most T1...) to describe a polynomial algorithm calculating the optimal time T to accomplish all the jobs.
What's the approach?
**Greg Utas**: You're right, won't happen again.
modified 26-Jun-20 7:52am.





Search for the lowest T1 such that the previous algorithm returns True.
For example, that can be done in O(log(S)) calls, where S is the sum of all times. Doing it in O(S) would not be polynomial time, because S can be exponential in the size of the input (imagine if the times are very few but very large integers), but log(S) is fine.





Hmm, so you actually direct me to an approach where I only have to divide each time by 2.
The logic behind that, I think, is:
If at first we see that we can split our jobs (total time = S) such that T1 = S (that is, K = 1 already suffices), then let's find out whether we can split them between 2 assignees equally; now we pass 0.5*S as the parameter to our first algorithm.
So we divide by 2 and call the first algorithm, and so on...
Am I right?
BTW, how do I think about proving the optimality of such an algorithm?
Thank you!





You can't choose K, right?
A useful property of Algorithm1 is that:
- if Algorithm1(T1) is false, then Algorithm1(T1 - k) is also false (for any k ≥ 0)
- if Algorithm1(T1) is true, then Algorithm1(T1 + k) is also true
Or in other words, there is a threshold below which it returns false (not enough time) and at or above which it returns true (enough time).





I obviously agree that this property indeed holds, but... I can't see how I should use it in some scheme of repeatedly dividing the total time by a constant to get O(log(S)).
Maybe you can give me another clue? Some approach?
Thank you!





If T1 was chosen in the middle of a range, then whichever result Algorithm1(T1) gives, half of the range will be uninteresting (either all False or all True).





@harold aptroot 
Sorry, but I'm not sure I got your point.
I agree if you mean the following: in case we've divided S by 2, and Algorithm1(T1) == TRUE, then we know for sure that T_Optimal <= T1.
So the algorithm should go as I described before, no? With the division by 2 each time. But it's probably not sufficient, as we may skip over some T1 which gives FALSE, and from there we don't know how many time units to add back...





When the result for T1 is False, that also cuts off half of the current range: the first half.





Genius. It reminds me a little bit of the Partition procedure in QuickSort.
So we start with Algorithm1(S); if it returns TRUE we call Algorithm1(0.5 * S), and so on...
If at some point (after i steps) we call Algorithm1(0.5^i * S) and it returns FALSE, we call Algorithm1(0.5 * (0.5^i * S + 0.5^(i-1) * S)), and so on...
Hmmm actually now I'm not sure about my suggestion.
Should I actually initialize an array of size S with all the natural numbers from 1 to S?
Thank you!
modified 26-Jun-20 16:56pm.





Member 14873814 wrote: It reminds me a little bit of the Partition procedure in QuickSort.
It has a bit of that flavour. What it reminds me of is Binary Search (well, that's what it is, but a specific variant).
Member 14873814 wrote: Should I actually initialize an array of size S with all the natural numbers from 1 to S?
No need, you can pass the index itself straight into Algorithm1.





What index exactly, if there is no array?





If it were a "by the book" Binary Search, there would be an index into the array of data. In this case the data does not explicitly exist; the array is replaced by a function (Algorithm1).





But I'm not sure how I should treat the data in this case.
Let's suppose our sum of job times is S = 100.
We have k = 3 assignees.
We start by calling Algorithm1(100) == TRUE of course
Now we are calling Algorithm1(50) == TRUE
Now Algorithm1(25) == FALSE
So in this case we want to check the middle of [25, 50], which we can get by calculating 0.5 * (25 + 50), so now we call Algorithm1(38), which returns TRUE.
But what now? We would divide 38 by 2 again and go out of the range [25, 50].
See the problem?





If the range was [25, 50] and 38 was picked as the midpoint and Algorithm1(38) is True, then the next range is [25, 38]. No problems, just updating the endpoint of the range. Taking the average of 25 and 38 ("divide by 2" is only what happens when the startpoint is zero) also does not cause a problem. Taking an average never "escapes" from the range.
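Putting the pieces of the thread together, the search over T1 can be sketched as follows (a Python sketch; the names are my own, and the feasibility test is the greedy check discussed earlier in the thread, inlined here so the example is self-contained):

```python
def optimal_t(times, k):
    """Binary-search the smallest T such that the greedy feasibility
    check (Algorithm1 in the thread) succeeds, maintaining an explicit
    [lo, hi] range instead of repeatedly halving from zero."""
    def can_finish_within(t1):
        # Greedy check: fill each assignee as far as possible.
        assignees, load = 1, 0
        for t in times:
            if t > t1:
                return False
            if load + t > t1:
                assignees, load = assignees + 1, 0
            load += t
        return assignees <= k

    # T must be at least the largest single job and at most the total.
    lo, hi = max(times), sum(times)
    while lo < hi:
        mid = (lo + hi) // 2
        if can_finish_within(mid):
            hi = mid        # feasible: the answer is mid or smaller
        else:
            lo = mid + 1    # infeasible: the answer is above mid
    return lo
```

The midpoint is always the average of the current endpoints, so it can never escape the range; each call discards half of it, giving O(log(S)) calls to the linear-time check.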





@harold aptroot
For some reason I can't explain, I don't see your last comment on this topic, where you mentioned the idea of saving the startpoint and endpoint (I see it through the notifications on the site and in my mail).
Amazing, simple and genius.
So we've actually come up with a polynomial greedy algorithm.
BTW, how do you approach proving the correctness, specifically the optimality, of this algorithm?
Thank you so much!!





The "sub-problem" is to not assign more "job time" than T1 to any one assignee.
Since there was no constraint that said an assignee could not be "idle" (or "optimal"), one can simply read contiguous jobs (in, say, reverse time order) and assign "blocks" until they're used up or assignees are at capacity.
The "dynamic" part is looping through the job list and assigning jobs (depleting the job list while adding to an assignee's jobs).





Hey Gerry,
I'm not sure you're referring to the next section of the problem we're dealing with now:
The next section of the problem asks us for the following:
Use the first algorithm (...at most T1...) to describe a polynomial algorithm calculating the optimal time T to accomplish all the jobs.
I think you've suggested a greedy algorithm for the first section of the problem, which is written in the main post.





Hi,
I was wondering if anyone knew whether the implementation of the hash function in a hash map is the same as in a hash table, i.e. are the same implementation options available for a hash map?
Also, I wonder why Hashtable was deprecated in Java 5, other than ConcurrentHashMap having multiple locks, and whether ConcurrentHashMap implements the same hash functions as a Hashtable.
This is likely a question for Oracle or Microsoft or for someone who has worked there. If anyone knows, please share.
Thanks,
Saad





The hash function should be identical for a hash table and a hash map; in both cases you go from a key to a table index.
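To illustrate the point (a minimal sketch; the function name is my own, and Python's built-in `hash()` stands in for whatever hash function the structure uses): in both a hash table and a hash map, the key is hashed and the result is reduced to a bucket index, typically by taking it modulo the table's capacity.

```python
def bucket_index(key, capacity):
    """Map a key to a table index: hash the key, then reduce the
    hash into the range [0, capacity)."""
    return hash(key) % capacity
```

The same reduction step applies whichever name the container goes by; what differs between implementations is collision handling and resizing policy, not the key-to-index idea itself.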



