The "sub-problem" is to avoid assigning more "job time" than T1 to any one assignee.
Since there was no constraint saying an assignee could not be "idle" (or less than fully loaded), one can simply read contiguous jobs (in, say, reverse time order) and assign "blocks" until the jobs are used up or the assignees are at capacity.
The "dynamic" part is looping through the job list and assigning jobs (depleting the job list while adding to an assignee's jobs).
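Since the original problem statement isn't reproduced here, the greedy pass described above can only be sketched; the names `jobTimes` and `t1` are illustrative, not from the original post.

```java
import java.util.ArrayList;
import java.util.List;

public class GreedyAssign {
    // Walk the job list in reverse, filling one assignee up to capacity t1
    // before moving to the next; later assignees may be left idle.
    static List<List<Integer>> assign(int[] jobTimes, int t1) {
        List<List<Integer>> assignees = new ArrayList<>();
        List<Integer> current = new ArrayList<>();
        int load = 0;
        for (int i = jobTimes.length - 1; i >= 0; i--) {
            // Start a new assignee when the next job would exceed capacity.
            if (load + jobTimes[i] > t1 && !current.isEmpty()) {
                assignees.add(current);
                current = new ArrayList<>();
                load = 0;
            }
            current.add(jobTimes[i]);
            load += jobTimes[i];
        }
        if (!current.isEmpty()) assignees.add(current);
        return assignees;
    }

    public static void main(String[] args) {
        // Made-up job times; with t1 = 7 the reverse pass groups [1,6], [2,4], [3].
        System.out.println(assign(new int[] {3, 4, 2, 6, 1}, 7));
    }
}
```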
It was only in wine that he laid down no limit for himself, but he did not allow himself to be confused by it.
― Confucian Analects: Rules of Confucius about his food
I was wondering if anyone knew whether the implementation of the
hash function in a hash map is the same as in a hash table, i.e. are
the same implementation options available for a hash map?
Also, I wonder why Hashtable was deprecated in Java 5, other than
ConcurrentHashMap having multiple locks, and whether
ConcurrentHashMap implements the same hash function as Hashtable.
This is likely a question for Oracle or Microsoft or for someone who has worked there. If anyone knows, please share.
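For what it's worth, the OpenJDK sources are public, so this much can be checked: both classes start from the key's `Object.hashCode()`; they differ only in how they map it to a bucket. A rough sketch (the constants `11` and `16` are the classes' default capacities, and the spread function is the one in Java 8's `HashMap`):

```java
import java.util.HashMap;
import java.util.Hashtable;

public class HashDemo {
    // HashMap (since Java 8) spreads the key's hashCode by XOR-ing in its high bits,
    // because it masks against a power-of-two capacity.
    static int hashMapSpread(int h) {
        return h ^ (h >>> 16);
    }

    public static void main(String[] args) {
        String key = "algorithm";
        int h = key.hashCode();
        // Hashtable: strip the sign bit, then modulo the (default 11) table length.
        int hashtableBucket = (h & 0x7FFFFFFF) % 11;
        // HashMap: spread, then mask against the (default 16) power-of-two capacity.
        int hashMapBucket = hashMapSpread(h) & (16 - 1);
        System.out.println(hashtableBucket + " " + hashMapBucket);

        // Either way, both containers agree on lookups for the same key.
        Hashtable<String, Integer> t = new Hashtable<>();
        HashMap<String, Integer> m = new HashMap<>();
        t.put(key, 1);
        m.put(key, 1);
        System.out.println(t.get(key) + " " + m.get(key));
    }
}
```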
I always considered an algorithm to be at a rather abstract level: A principal method of solving a problem, way ahead of the coding. An algorithm can be described in prose, or if you want it in a more formal way, in pseudocode. Developing new algorithms is to develop new methods. A different way of doing sorting. Or load distribution. Or image compression. Or...
So when someone asks for new algorithm designs, they essentially ask for research work: developing principally new methods for solving a given problem.
I certainly don't think that Member 14843185 is inviting you to participate in research work aimed at finding new solutions. His project is somewhere on the long line between basic research and homework. Honestly, I suspect it is leaning toward the homework side. Or at least implementation of known methods, not development of new ones.
Coding for pay. You may be cynical enough to say: Even if it turns out to be to do his homework, if he pays me for it, it is his problem that he is not learning what he should learn. Or your morals may say that it isn't right - he is fooling himself and you should not contribute to it.
If he really represents some serious development company in search of consultants to take on a project with them, you might consider it. But I think this looks like a rather strange way of hiring consultants.
Your task is to develop an algorithm that would sort data such as these from least to greatest. Specifically, given an unsorted set of N decimal values, your algorithm should sort them to give an answer of the sorted data. For this set of N = 6, your algorithm should produce:
Execute your algorithm for a different set of data, such as a subset of the given data, data you make up, or another month's climate data, such as February 2017: https://www.ncdc.noaa.gov/sotc/global-regions/201702
Does your algorithm work for any N? Have you thought of corner cases it might need to handle, such as N = 0 or N = 1?
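One way to satisfy the corner cases above is insertion sort, since its loop body simply never runs for N = 0 or N = 1. A minimal sketch (the sample values are made up; the original N = 6 data set isn't reproduced here):

```java
import java.util.Arrays;

public class SortDemo {
    // Insertion sort: shift larger elements right, then drop the key into place.
    // Works for any N, including N = 0 and N = 1, where the outer loop never runs.
    static void insertionSort(double[] a) {
        for (int i = 1; i < a.length; i++) {
            double key = a[i];
            int j = i - 1;
            while (j >= 0 && a[j] > key) {
                a[j + 1] = a[j];
                j--;
            }
            a[j + 1] = key;
        }
    }

    public static void main(String[] args) {
        // Made-up decimal values standing in for the assignment's data.
        double[] data = {0.98, 0.55, 1.16, 0.77, 0.88, 0.62};
        insertionSort(data);
        System.out.println(Arrays.toString(data));
    }
}
```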
I'm currently making a Windows Forms app, trying to program a micro over UART. I can achieve it, although it takes a long time to carry out the programming. I have used stopwatches to determine that my read function takes up the majority of the time. When I try to read the micro's response to each command, I have to wait for all of it, which is why I'm using a while loop in the code below when the returned message size is not what I expect. What I'm wondering is whether there is any way to speed up this process. The response from the micro should be pretty fast; it's running at a baud rate of 115200, meaning the whole 512 kb file should in theory take just over 30 seconds to complete. At the moment it takes more than double that, at 80 seconds.
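The original C# read loop isn't shown here, but the usual fix for this symptom is to read in bulk until the expected count arrives, rather than polling a byte (or a timer tick) at a time. Sketched in Java against a plain `InputStream` standing in for the serial port (the method name `readExact` is illustrative):

```java
import java.io.ByteArrayInputStream;
import java.io.IOException;
import java.io.InputStream;

public class ReadUntil {
    // Accumulate exactly 'expected' bytes with as few read() calls as possible;
    // read(byte[], off, len) blocks until at least one byte is available.
    static byte[] readExact(InputStream in, int expected) throws IOException {
        byte[] buf = new byte[expected];
        int off = 0;
        while (off < expected) {
            int n = in.read(buf, off, expected - off);
            if (n < 0) {
                throw new IOException("stream closed before " + expected + " bytes");
            }
            off += n;
        }
        return buf;
    }

    public static void main(String[] args) throws IOException {
        // A fake two-byte reply standing in for the micro's response.
        InputStream fake = new ByteArrayInputStream(new byte[] {0x4F, 0x4B});
        byte[] reply = readExact(fake, 2);
        System.out.println(new String(reply)); // prints "OK"
    }
}
```

The same shape applies to `SerialPort` in .NET: ask for the whole remaining count in one call and advance an offset by however many bytes the call actually returned, instead of spinning until the full message happens to be buffered.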
This forum is for questions, not instruction. Feel free to write an article on it. I'd probably read it because I found graph theory to be the most useful area of mathematics when I was studying comp sci.
I remember doing such algorithms back in my university days... but that was 20 years ago lol
Instead of reinventing the wheel, I'm sure there is an efficient algorithm to do the following:
I have an array of values. Size doesn't matter much... it is usually no more than 30-40 elements.
I need to split the array into subgroups so that each subgroup's total is as close as possible to a value X (in our case 20), i.e. the smallest number of groups, each with a total closest to X = 20.
(In the real world: we have a roll of material 20 metres long... and customers ask for the material cut in sizes a, b, c, d... and we want to cut it efficiently without losing material.)
Appreciate any reference!
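This is the one-dimensional cutting-stock problem, which for a single roll length is bin packing. Optimal solutions are NP-hard, but for 30-40 values a standard heuristic like first-fit decreasing is usually good enough. A sketch with made-up lengths and X = 20, as in the question:

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

public class CuttingStock {
    // First-fit decreasing: sort pieces longest-first, put each piece in the
    // first roll that still has room, and open a new roll otherwise.
    static List<List<Double>> firstFitDecreasing(double[] pieces, double rollLength) {
        double[] sorted = pieces.clone();
        Arrays.sort(sorted); // ascending; we iterate from the back for longest-first
        List<List<Double>> rolls = new ArrayList<>();
        List<Double> remaining = new ArrayList<>();
        for (int i = sorted.length - 1; i >= 0; i--) {
            double p = sorted[i];
            boolean placed = false;
            for (int r = 0; r < rolls.size(); r++) {
                if (remaining.get(r) >= p) {
                    rolls.get(r).add(p);
                    remaining.set(r, remaining.get(r) - p);
                    placed = true;
                    break;
                }
            }
            if (!placed) {
                List<Double> roll = new ArrayList<>();
                roll.add(p);
                rolls.add(roll);
                remaining.add(rollLength - p);
            }
        }
        return rolls;
    }

    public static void main(String[] args) {
        // Made-up customer lengths; 20-metre rolls as in the question.
        double[] orders = {9, 8, 7, 6, 5, 4, 3, 2};
        System.out.println(firstFitDecreasing(orders, 20.0));
    }
}
```

For the small sizes described, an exact answer is also feasible via dynamic programming over subsets, but first-fit decreasing is the usual starting point and is guaranteed to use at most about 22% more rolls than optimal.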