I remember doing algorithms like this back at university... but that was 20 years ago, lol.
Instead of reinventing the wheel, I'm sure there is an efficient algorithm for the following:
I have an array of values. The size doesn't really matter; usually it is no more than 30-40 elements.
I need to split the array into subgroups so that each subgroup's total is as close as possible to a value X (in our case 20), i.e. the smallest number of groups, each with a total closest to X=20.
(In the real world: we have a roll of material 20 meters long, customers ask for the material cut in sizes a, b, c, d, ..., and we want to cut it efficiently without wasting material.)
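This is essentially the classic bin-packing / cutting-stock problem, which is NP-hard in general, but for 30-40 items a greedy heuristic such as first-fit decreasing usually gets close to optimal. A minimal sketch in Python (the function name `pack_cuts` is my own, and this is a heuristic, not a guaranteed-optimal packing):

```python
def pack_cuts(lengths, roll_length=20):
    """Greedy first-fit decreasing: place each requested length into the
    first roll that still has room, opening a new roll when none fits."""
    rolls = []  # each roll is a list of cut lengths summing to <= roll_length
    for length in sorted(lengths, reverse=True):
        for roll in rolls:
            if sum(roll) + length <= roll_length:
                roll.append(length)
                break
        else:
            rolls.append([length])
    return rolls
```

For an exact answer on small inputs you could instead enumerate partitions or use an integer-programming solver, but first-fit decreasing is usually a good starting point at this scale.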
I have been working on uploading a .bin file (basically a hello-world project) to a micro. I find that it could definitely be faster when uploading the contents of the file to the micro's RAM, from where it then copies it to flash.
The data transfer happens in the code below:
for (int i = 0; i < Page.numPages; i++)
{
    ComPort.Write(code, start_Address_Code, 512);                  // first 512-byte half of the page
    ComPort.Write(code, start_Address_Code + Page.ram_Size, 512);  // second 512-byte half
    copy2Flash(flash_Address, "536871680", "1024");                // copy 1024 bytes from RAM to flash
}
Short of having a serial break-out box so you can monitor the signals, it's difficult to know what to suggest. At 460800 baud it should take less than 15 seconds to transfer a 500 KB file. What we don't know is how large the receive buffer on the micro is, and how fast it can process the data arriving there. Probably what is going on is that the micro's receive buffer gets filled, so it signals the PC to stop sending; the micro processes some data from the receive buffer and then signals the PC to start sending again.
There are software break-out boxes available if you google for them. I think I would grab one and take a look at the signals being generated to see if that's where your problem is.
So I think you are right. I downloaded a serial port sniffer and determined that the data from a 1024-byte buffer needs to be written in chunks of 45 bytes at a time to allow the micro to process it. Any idea how I would begin to do this, or a link to an example?
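A simple way to throttle the transfer is to slice the buffer into 45-byte chunks and pause briefly between writes so the micro can drain its receive buffer. A minimal Python sketch (the helper name `write_chunked` and the 5 ms delay are my own assumptions; with pyserial you would pass `ser.write` as `write_fn`, and the same loop shape works in C# with `ComPort.Write` plus a `Thread.Sleep`):

```python
import time

def write_chunked(write_fn, data, chunk_size=45, delay_s=0.005):
    """Send data in chunk_size pieces, pausing between chunks so the
    receiver has time to process each one before the next arrives."""
    for offset in range(0, len(data), chunk_size):
        write_fn(data[offset:offset + chunk_size])
        time.sleep(delay_s)
```

The right delay depends on how fast the micro empties its buffer; hardware or software flow control (RTS/CTS or XON/XOFF), if the micro supports it, would remove the need for a fixed delay altogether.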
Hello! I'm implementing a virus simulation where nodes (people) in a network infect neighboring nodes, and I want to define the average number of neighbors in the network generation settings. Currently I'm generating a hex grid (with an average of 6 neighbors per node), but the nodes can have a link to any other node. I thought I could start with the hex grid and then remove and add links to other nodes by some method until I reach the target average (something between 3 and 16), but my attempts have led to a biased grid or failed outright. As this is quite a specific problem, I couldn't find any help in articles etc.
So, if you have any ideas how to solve or approach this problem, I would appreciate it.
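One way to sidestep the biased rewiring is to sample the edge set directly: an average degree of k over n nodes means exactly round(n*k/2) undirected edges, so drawing that many distinct random edges hits the target average exactly (though, unlike the hex grid, this gives no spatial structure). A sketch under that assumption, with names of my own choosing:

```python
import random

def random_graph(n, avg_degree, seed=None):
    """Build an undirected graph whose average degree matches the target,
    by sampling round(n * avg_degree / 2) distinct edges uniformly.
    Requires avg_degree < n - 1 so enough distinct edges exist."""
    rng = random.Random(seed)
    target_edges = round(n * avg_degree / 2)
    edges = set()
    while len(edges) < target_edges:
        a, b = rng.sample(range(n), 2)
        edges.add((min(a, b), max(a, b)))  # canonical order avoids duplicates
    adj = {i: [] for i in range(n)}        # adjacency list
    for a, b in edges:
        adj[a].append(b)
        adj[b].append(a)
    return adj
```

If you want to keep the hex grid's local structure, an alternative is Watts-Strogatz-style rewiring: start from the grid and rewire each edge to a random endpoint with some probability, which preserves the degree count while mixing in long-range links.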
Root, V, R, C, Sub, RLcon, MCon are containers.
L1, L2, L3, L4, L6, L7 are leaf nodes.
I have the following data:
1) a pointer to the root container
2) root.getContainers - returns the list of sub-containers; for example,
root.getContainers will return V, R, and M here,
V.getContainers will return Sub,
V.getLeafs will return L1, L2
I need to create a YAML dump from a nested ordered dictionary.
The expected dump of the nested dictionaries is as follows:
type : leafOfV
type : leafOfV
type : Container
type : leafOfR
type : leafOfM
The algorithm has to be designed using Python's OrderedDict.
Can someone help me write a correct and optimized algorithm in Python?
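One approach is a recursive walk that mirrors the container tree into a nested OrderedDict, then a small dump function. A sketch assuming the getContainers/getLeafs interface described above (the `name` attribute, the `"leaf"` type string, and both function names are my own assumptions; PyYAML's `yaml.dump` could replace the hand-rolled dumper, but by default it does not render OrderedDict cleanly):

```python
from collections import OrderedDict

def to_ordered_dict(node):
    """Recursively convert a container tree into a nested OrderedDict,
    assuming each container exposes name, getContainers() and getLeafs()."""
    d = OrderedDict()
    d["type"] = "Container"
    for leaf in node.getLeafs():
        d[leaf.name] = OrderedDict(type="leaf")
    for sub in node.getContainers():
        d[sub.name] = to_ordered_dict(sub)
    return d

def dump_yaml(d, indent=0):
    """Minimal YAML-style dump for nested dicts of strings."""
    lines = []
    for key, value in d.items():
        if isinstance(value, dict):
            lines.append(" " * indent + f"{key}:")
            lines.append(dump_yaml(value, indent + 2))
        else:
            lines.append(" " * indent + f"{key} : {value}")
    return "\n".join(lines)
```

Insertion order in the OrderedDict is exactly the order the recursion visits children, so the dump preserves the tree's traversal order.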
let pos1 = first_of(str)
let pos2 = last_of(str)
while ( pos1 < pos2 )
    # skip digits from the left
    while ( isdigit(str[pos1]) AND pos1 < pos2 )
        pos1 = pos1 + 1
    # skip digits from the right
    while ( isdigit(str[pos2]) AND pos1 < pos2 )
        pos2 = pos2 - 1
    # having got here, we know that either
    # str[pos1] and str[pos2] are non-digits, so we can swap them,
    # or pos1 >= pos2, which means we are done (outer while loop terminates)
    if ( pos1 < pos2 )
        swap(str[pos1], str[pos2])
        pos1 = pos1 + 1
        pos2 = pos2 - 1
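The same two-pointer idea is runnable in Python: it reverses the non-digit characters while digits stay in their original positions (the function name `swap_non_digits` is my own):

```python
def swap_non_digits(s: str) -> str:
    """Reverse the non-digit characters of s, leaving digits fixed in place."""
    chars = list(s)  # strings are immutable, so swap in a list
    pos1, pos2 = 0, len(chars) - 1
    while pos1 < pos2:
        while pos1 < pos2 and chars[pos1].isdigit():  # skip digits from the left
            pos1 += 1
        while pos1 < pos2 and chars[pos2].isdigit():  # skip digits from the right
            pos2 -= 1
        if pos1 < pos2:
            chars[pos1], chars[pos2] = chars[pos2], chars[pos1]
            pos1 += 1
            pos2 -= 1
    return "".join(chars)
```

For example, `swap_non_digits("a1b2c")` leaves the digits at indices 1 and 3 untouched and reverses a, b, c around them.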