I have always considered an algorithm to be something at a rather abstract level: a principal method of solving a problem, well ahead of any coding. An algorithm can be described in prose or, if you want it more formal, in pseudocode. Developing new algorithms means developing new methods. A different way of doing sorting. Or load distribution. Or image compression. Or...
So when someone asks for new algorithm designs, they are essentially asking for research work: developing principally new methods for solving a given problem.
I certainly don't think that Member 14843185 is inviting you to participate in research work aimed at finding new solutions. His project is somewhere on the long line between basic research and homework. Honestly, I suspect it is leaning toward the homework side - or at least toward implementing known methods, not developing new ones.
Coding for pay. You may be cynical enough to say: even if it turns out to be doing his homework for him, if he pays me for it, it is his problem that he is not learning what he should learn. Or your morals may say that it isn't right - he is fooling himself, and you should not contribute to that.
If he really represents some serious development company in search of consultants to take on a project with them, you might consider it. But I think this looks like a rather strange way of hiring consultants.
Your task is to develop an algorithm that sorts data such as these from least to greatest. Specifically, given an unsorted set of N decimal values, your algorithm should sort them and output the sorted data. For this set of N = 6, your algorithm should produce:
Execute your algorithm for a different set of data, such as a subset of the given data, data you make up, or another month's climate data, such as February 2017: https://www.ncdc.noaa.gov/sotc/global-regions/201702
Does your algorithm work for any N? Have you thought of corner cases it might need to handle, such as N = 0 or N = 1?
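For illustration only (the assignment of course expects your own work), here is a minimal insertion sort sketch in Python; the six data values are made up, standing in for whatever climate figures the assignment supplies:

```python
def insertion_sort(values):
    """Sort a list of decimals from least to greatest, returning a new list."""
    result = list(values)  # copy so the input is left untouched
    for i in range(1, len(result)):
        key = result[i]
        j = i - 1
        # shift larger elements right until key's slot is found
        while j >= 0 and result[j] > key:
            result[j + 1] = result[j]
            j -= 1
        result[j + 1] = key
    return result

data = [0.98, 0.77, 1.12, 0.88, 1.05, 0.69]  # made-up N = 6 sample
print(insertion_sort(data))   # ascending order
print(insertion_sort([]))     # corner case N = 0
print(insertion_sort([3.14])) # corner case N = 1
```

Note that the corner cases the prompt asks about fall out for free here: with N = 0 or N = 1 the outer loop body never runs, so the input comes back unchanged.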
I'm currently making a Windows Forms app to program a micro over UART. I can achieve it, although it is taking a long time to carry out the programming. I have used stopwatches to determine that it is my read function taking up the majority of the time. When I try to read the micro's response to each command I have to wait for all of it, which is why I'm using a while loop in the code below when the returned message size is not what I expect. What I'm wondering is whether there is any way to speed up this process. The response from the micro should be pretty fast: it's running at a baud rate of 115200, meaning the whole 512 kb file should in theory take just over 30 seconds to complete, but at the moment it takes more than double that, at 80 seconds.
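One common fix for this pattern is to stop busy-polling and instead accumulate bytes until the expected count arrives or a deadline passes. The sketch below is Python rather than C# (purely for illustration; `read_chunk` is a stand-in I made up for whatever port-read call you use), but the same idea applies to a .NET SerialPort: set a read timeout and ask for the remaining byte count in one blocking call instead of spinning:

```python
import time

def read_exact(read_chunk, expected_len, timeout_s=1.0):
    """Accumulate bytes from read_chunk(n) until expected_len bytes
    have arrived or the timeout expires, instead of hot-polling."""
    buf = bytearray()
    deadline = time.monotonic() + timeout_s
    while len(buf) < expected_len and time.monotonic() < deadline:
        chunk = read_chunk(expected_len - len(buf))
        if chunk:
            buf.extend(chunk)
        else:
            time.sleep(0.001)  # yield briefly so the loop doesn't burn CPU
    return bytes(buf)
```

The timeout also gives you a clean failure path: if the micro never sends the full response, you get back a short buffer after `timeout_s` instead of looping forever.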
This forum is for questions, not instruction. Feel free to write an article on it. I'd probably read it because I found graph theory to be the most useful area of mathematics when I was studying comp sci.
I remember doing such algorithms back in my university days... but that was 20 years ago lol
Instead of reinventing the wheel - I'm sure there is already an efficient algorithm to do the following:
I have an array of values. Size doesn't really matter... usually it is no more than 30-40 elements.
I need to split the array into subgroups so that each subgroup's total is as close as possible to a value X (in our case 20), i.e. the smallest number of groups, each summing as close to X = 20 as possible.
(In the real world: we have a roll of material, 20 meters long... and customers ask for the material cut in sizes a, b, c, d..., and we want to cut it efficiently without wasting material.)
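What you describe is the classic cutting-stock / bin-packing problem; finding the truly optimal grouping is NP-hard, but the first-fit-decreasing heuristic is simple and usually close. A Python sketch (the 20-meter roll length and the sample cut list are assumptions of mine):

```python
def first_fit_decreasing(cuts, roll_length=20):
    """Group cut lengths into rolls so each roll's total stays
    <= roll_length, placing the longest cuts first (FFD heuristic)."""
    rolls = []  # each roll is a list of cut lengths
    for cut in sorted(cuts, reverse=True):
        for roll in rolls:
            if sum(roll) + cut <= roll_length:
                roll.append(cut)  # fits in an existing roll
                break
        else:
            rolls.append([cut])   # no roll had room; start a new one
    return rolls

orders = [9, 7, 6, 5, 5, 4, 3, 1]  # made-up customer lengths
for roll in first_fit_decreasing(orders):
    print(roll, 'waste:', 20 - sum(roll))
```

For 30-40 values this runs instantly; if FFD's waste is not good enough, the exact version can be solved as an integer program, but for a roll-cutting shop the heuristic is often sufficient.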
I have been working on uploading a .bin file - basically a hello-world project - to a micro. I find that it could definitely be faster when uploading the contents of the file to the micro's RAM, from where it then copies it to flash.
The data transfer happens in the code below:
for (int i = 0; i < Page.numPages; i++)
{
    // send two 512-byte halves of the current page to the micro's RAM buffer
    ComPort.Write(code, start_Address_Code, 512);
    ComPort.Write(code, start_Address_Code + Page.ram_Size, 512);
    // then tell the bootloader to copy the buffered page into flash
    copy2Flash(flash_Address, "536871680", "1024");
}
Short of having a serial break-out box so you can monitor the signals, it's difficult to know what to suggest. At 460800 baud it should take less than 15 seconds to transfer a 500 kb file. What we don't know is how large the receive buffer on the micro is, or how fast it can process the data arriving there. Probably what is going on is that the micro's receive buffer fills up, so it signals the PC to stop sending; the micro processes some data from the buffer and then signals the PC to start sending again.
There are software breakout boxes available if you google for them. I think I would grab one, take a look at the signals being generated, and see if that's where your problem is.
So I think you are right. I downloaded a serial port sniffer and determined that the data from a 1024-byte buffer needs to be written in a queue of 45 bytes at a time to allow the micro to process it. Any idea how I would begin to do this, or a link to an example?
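The basic pattern is just slicing the page buffer into 45-byte pieces and pausing between writes so the micro can drain its buffer. A Python sketch of the idea (the 1 ms pause is a guess you would tune; in your C# app the same loop would call `ComPort.Write` with an advancing offset instead):

```python
import time

def write_chunked(write, data, chunk_size=45, pause_s=0.001):
    """Send data in chunk_size pieces, pausing between writes so a
    slow receiver can keep up. write() is the port's send function."""
    for offset in range(0, len(data), chunk_size):
        write(data[offset:offset + chunk_size])
        time.sleep(pause_s)

sent = []
write_chunked(sent.append, bytes(1024), pause_s=0)
print(len(sent))      # 1024 bytes / 45 -> 23 chunks
print(len(sent[-1]))  # the last chunk carries the 34-byte remainder
```

A fixed pause is the crudest form of pacing; if the micro supports RTS/CTS hardware handshaking, enabling that on the port is the cleaner fix, since the micro then throttles the PC itself.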
Hello! I'm implementing a virus simulation where nodes (people) in a network infect neighboring nodes, and I want to set the average number of neighbors in the network-generation settings. At the moment I generate a hex grid (an average of 6 neighbors per node), but nodes may link to any other node. I thought I could start with the hex grid and then remove and add links by some method until I reach the target average (somewhere between 3 and 16), but my attempts have produced a biased grid or failed outright. As this is quite a specific problem, I couldn't find help in any articles etc.
So, if you have any ideas on how to solve or approach this problem, I would appreciate it.
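One unbiased way to hit a target average degree, if you don't need the hex-grid structure as a starting point, is the Erdős–Rényi G(n, p) model: include each possible edge independently with probability p = k / (n − 1), which gives an expected average degree of exactly k. A sketch of that idea (node count and seed are arbitrary choices of mine):

```python
import random

def random_graph(n, avg_degree, seed=None):
    """Build an undirected graph whose expected average degree is
    avg_degree, by adding each of the n*(n-1)/2 possible edges
    independently with probability p = avg_degree / (n - 1)."""
    rng = random.Random(seed)
    p = avg_degree / (n - 1)
    adj = {i: set() for i in range(n)}
    for i in range(n):
        for j in range(i + 1, n):
            if rng.random() < p:
                adj[i].add(j)
                adj[j].add(i)
    return adj

g = random_graph(2000, 6, seed=1)
avg = sum(len(nbrs) for nbrs in g.values()) / len(g)
print(round(avg, 2))  # concentrates around 6 for large n
```

If you do want to keep the hex grid's local structure and only adjust the average, the standard unbiased move is the double-edge swap (pick two edges (a, b) and (c, d), rewire to (a, d) and (c, b)), which preserves every node's degree; to change the average you would add or delete uniformly random edges instead.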