|
Why do you think his moniker starts with "Pita"?
|
|
|
|
|
|
Hi there, could someone please help me by providing a couple of algorithms?
I am using a CSV file of cities, each of which has an x,y coordinate.
1) A "Local Search" solution
2) A simple-as-possible "Evolutionary Algorithm"
Apologies, I initially wrote genetic instead of Evolutionary.
Many thanks in advance for reading.
modified 18-Mar-20 15:27pm.
|
|
|
|
|
|
Thank you for your input.
|
|
|
|
|
|
The problem (the travelling salesman problem) is NP-hard, which basically means no polynomial-time algorithm is known: the exact approaches still boil down to searching an exponential number of candidate tours (O(n!) by brute force) for the optimal solution. Wikipedia[^] has an article with more information.
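To see why that blows up so quickly, here is a minimal brute-force sketch (just an illustration, not anyone's production code); the x[]/y[] arrays and the dist() helper are stand-ins for however you load your CSV, and trying every permutation is only practical up to roughly a dozen cities:

// Minimal brute-force TSP sketch: fixes city 0 as the start, tries every
// permutation of the remaining cities and keeps the shortest round trip.
// x[] and y[] stand in for the coordinates read from the CSV.
class BruteForceTsp {
    static double[] x = { 0, 3, 6, 7, 2 };   // example coordinates (stand-ins)
    static double[] y = { 0, 4, 1, 5, 8 };
    static double bestLength = Double.MAX_VALUE;

    static double dist(int a, int b) {
        double dx = x[a] - x[b], dy = y[a] - y[b];
        return Math.sqrt(dx * dx + dy * dy);
    }

    static void search(int[] tour, int pos) {
        if (pos == tour.length) {                        // complete tour: measure the round trip
            double len = dist(tour[tour.length - 1], tour[0]);
            for (int i = 1; i < tour.length; i++) len += dist(tour[i - 1], tour[i]);
            bestLength = Math.min(bestLength, len);
            return;
        }
        for (int i = pos; i < tour.length; i++) {        // place every remaining city at position pos
            int t = tour[pos]; tour[pos] = tour[i]; tour[i] = t;
            search(tour, pos + 1);
            t = tour[pos]; tour[pos] = tour[i]; tour[i] = t;   // undo the swap
        }
    }

    public static void main(String[] args) {
        int[] tour = { 0, 1, 2, 3, 4 };                  // city 0 fixed, permute the rest
        search(tour, 1);
        System.out.println("Shortest tour length: " + bestLength);
    }
}

A local search such as 2-opt, or a simple evolutionary algorithm, trades the guarantee of optimality for something that finishes in reasonable time, which is why they are the usual answer to the original question.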
|
|
|
|
|
Thank you for your input.
|
|
|
|
|
Hello, I would like to ask for some hints about a problem that I am trying to solve.
I have 3 cars that have to "explore" a map; I discretized the map into a graph.
So now the problem is that I want to find a path that visits all the nodes in the graph (the graph is very sparse, with more or less 200 nodes) with 3 agents "exploring" in parallel.
So I tried to formulate it as a vehicle routing problem (the equivalent of the TSP, but with more agents). To solve the VRP I implemented a tabu search.
Problem is: it performs very poorly, because a VRP (or even a TSP) instance with 200 nodes has too large a solution space.
So I was wondering if someone could suggest another approach.
The problem, in short, is "visit all nodes of a graph along the shortest path possible"; passing through the same node more than once is allowed, but of course not optimal.
And yeah, it would be nice to have something that makes it "easy" to split the path into n sub-paths, since I have more than one agent exploring at the same time.
You could imagine the problem as N cleaning robots that want to clean the whole floor without overlapping.
I don't need the optimal solution, just a "good one"; that's why I tried tabu search.
I will be thankful for any suggestions!
Edit: I would like to add some extra notes:
- The tabu search that I implemented generates, for each solution, 500 neighbours (by randomly swapping 2 nodes in the "nodes to visit" vector), picks the best neighbour, and stores it in the tabu list (there is a sketch of this move after these notes). The tabu list holds up to 10'000 solutions, and I ran 100'000 iterations.
It took 12h and the solution is something like 10 times worse than optimal.
- sadly, I am not allowed to formulate the problem with linear programming, because apparently it would be "too easy". (It doesn't depend on me)
- I know that there's a solution that involves creating a minimum spanning tree of the graph and just following it, but I would like to try something more advanced than this :/
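For reference, here is a sketch of the swap move mentioned in the first note (a simplified description, not the exact code I ran): the tour is an int[] of node indices, dist is a precomputed distance matrix, and the tabu list plus the split into 3 routes are left out:

// Sketch of the "randomly swap two nodes" neighbourhood move described above.
// Generates `count` random neighbours of `tour` and returns the best one,
// scoring each candidate as a closed tour over the distance matrix `dist`.
class SwapNeighbourhood {
    static java.util.Random rng = new java.util.Random();

    static double tourLength(int[] tour, double[][] dist) {
        double len = dist[tour[tour.length - 1]][tour[0]];      // close the loop
        for (int i = 1; i < tour.length; i++) len += dist[tour[i - 1]][tour[i]];
        return len;
    }

    static int[] bestNeighbour(int[] tour, double[][] dist, int count) {
        int[] best = null;
        double bestLen = Double.MAX_VALUE;
        for (int k = 0; k < count; k++) {
            int[] cand = tour.clone();
            int i = rng.nextInt(cand.length), j = rng.nextInt(cand.length);
            int t = cand[i]; cand[i] = cand[j]; cand[j] = t;    // swap two random positions
            double len = tourLength(cand, dist);
            if (len < bestLen) { bestLen = len; best = cand; }
        }
        return best;
    }
}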
|
|
|
|
|
Member 14732552 wrote: sadly, I am not allowed to formulate the problem with linear programming, because apparently it would be "too easy". (It doesn't depend on me) Does that mean you can't use LP at all, or just that you can't simply formulate the whole thing as one big ILP model and have e.g. Gurobi or GLPK solve it? I ask because, if you can use LP as part of a bigger solution, you could still use it as a very effective heuristic to base a Branch and Bound algorithm on.
|
|
|
|
|
Well, maybe I could use it if it is part of something bigger. Could you explain a bit more how B&B works with LP? (Or point me to some reference?)
|
|
|
|
|
It's on Wikipedia as well. The basic idea is to recursively construct all solutions, but at every step down the recursion tree you also optimistically estimate (using e.g. LP) how good the best possible solution in this sub-tree could be. If the estimate is worse than the best solution found so far, there is no point exploring that sub-tree, and that lets you skip a (usually) huge amount of exploration.
With LP-based estimates, it can also easily happen that the LP actually gives an integer solution (that doesn't tend to happen early on, but it does tend to happen before the bottom of the search tree is reached), and in that case it is the actual best solution in this sub-tree.
There is an ILP formulation of VRP on the Wikipedia page of VRP; dropping the integrality constraint turns it into an LP formulation suitable for such estimates. Some extras that strengthen the linear relaxation are also mentioned. Using just the "basic" formulation works, but the estimates are not very good then.
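To make the shape of the algorithm concrete, here is a rough skeleton (only a sketch, not tied to any LP library); PartialSolution, lowerBound() and branch() are placeholders for the problem-specific parts, with lowerBound() standing in for whatever optimistic estimate you use, e.g. solving the LP relaxation of the sub-problem:

// Rough branch-and-bound skeleton. The abstract methods are the
// problem-specific parts you would have to supply for the VRP.
abstract class PartialSolution {
    abstract boolean isComplete();                      // every decision fixed?
    abstract double cost();                             // cost of a complete solution
    abstract double lowerBound();                       // optimistic estimate for this sub-tree
    abstract java.util.List<PartialSolution> branch();  // children: one per choice for the next decision
}

class BranchAndBound {
    double bestCost = Double.MAX_VALUE;
    PartialSolution best;

    void solve(PartialSolution node) {
        if (node.lowerBound() >= bestCost) return;      // prune: cannot beat the best found so far
        if (node.isComplete()) {
            if (node.cost() < bestCost) { bestCost = node.cost(); best = node; }
            return;
        }
        for (PartialSolution child : node.branch())     // otherwise explore the sub-tree
            solve(child);
    }
}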
|
|
|
|
|
Mmh that's very interesting, thanks a lot!
So if I understood correctly, I explore all solutions with recursion, using B&B to cut the branches that (according to an optimistic estimate) would only lead to solutions worse than the best found so far.
Plus, when I am deep enough, I "freeze" the solution found so far and use LP to solve what's left.
Is that right?
Do you think this could be solved in less than 5 minutes, considering that the size of the graph is ~250 nodes?
And if I just use the B&B approach without the linear programming (which I am not 100% familiar with), do you think it could work as well?
Would it make sense to create a heuristic that explores "hypothetically better" solutions first? (Not sure if I am able to do it.)
I am afraid that, after the failure with the tabu search, I could implement this and end up with something that doesn't work because of the size of the problem.
|
|
|
|
|
Member 14732552 wrote: Plus, when I am deep enough, I "freeze" the solution found so far and use LP to solve what's left. I wouldn't put it like that; it's more that the LP solution naturally tends to become integral at some point (meaning it's a "real solution", not just an estimate), and then you can use it directly. It's just something that happens automatically, and you can use it as a shortcut when it does.
Member 14732552 wrote: Do you think this could be solved in less than 5 minutes, considering that the size of the graph is ~250 nodes? IDK, I've solved TSPs of that size, and a bit faster than that. But VRP is a bit different. For both of them, how fast it is depends a lot on how good the estimates are. There are many advanced techniques to improve the basic LP, mostly techniques that look at a fractional solution and then generate a "cut" that adds a constraint to the LP such that it brings the new optimal solution closer to what the integer solution would be. Gomory cuts can be used, but the really high quality stuff is specific to the problem.
Member 14732552 wrote: use the B&B approach without the linear programming You can do that, you just need some optimistic estimate. It doesn't matter how you get it, but it should be optimistic: a pessimistic estimate (e.g. doing a quick greedy search or whatever) would mean that the sub-tree with the optimal solution in it might be skipped because the estimate said the sub-tree is bad.
Member 14732552 wrote: Would it make sense to create a heuristic that explores "hypothetically better" solutions first? (Not sure if I am able to do it.) There is a lot of freedom in the B&B framework. Nodes can be explored in basically whatever order, you can order the variables however you want (with an LP based estimate, an interesting strategy is picking a variable to branch on that the LP solution was "least sure about" - closest to 0.5 - rather than a variable that was close to 0 or 1), and you can even change the strategies dynamically.
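For what it's worth, the "least sure about" rule itself is tiny; a sketch, assuming the LP relaxation has just handed you one fractional value per decision variable:

// Pick the variable to branch on whose LP value is "least sure", i.e. whose
// fractional part is closest to 0.5. Returns -1 if everything is already integral.
class BranchingRule {
    static int pickBranchingVariable(double[] lpValues) {
        int best = -1;
        double bestGap = Double.MAX_VALUE;
        for (int i = 0; i < lpValues.length; i++) {
            double frac = lpValues[i] - Math.floor(lpValues[i]);   // fractional part
            if (frac == 0.0) continue;                             // integral: nothing to branch on here
            double gap = Math.abs(frac - 0.5);
            if (gap < bestGap) { bestGap = gap; best = i; }
        }
        return best;
    }
}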
|
|
|
|
|
Thank you very much, I will try this approach!
|
|
|
|
|
I have tried a similar problem using a Genetic Algorithm and the results were astonishing. I think trying a Genetic Algorithm may work wonderfully for you too in speeding up the search.
|
|
|
|
|
I'm confused about recursion in mergesort. I've tried putting comments in the code that track the variables so it's easier to understand what's going on, but I'm still struggling.
For an array of size 4, I understand the initial process -
mergesort(0,3).
0 is less than 3, so we find the middle value = 1. Low = 0, Middle = 1 and High = 3.
mergesort(0,1)
0 is less than 1, so we find the middle value = 0. Low = 0, Middle = 0, High = 1.
0 is not < 0, so the function is called again, but with (middle + 1, high) as its parameters.
1 is not < 1, so the function merge is called with the parameters (0,0,1).
So far so good !
But when the recursive function kicks in again, this time it has the values (0,2,3), which have been passed to the second recursive call (middle + 1, high). How did this happen?? Is it because the second recursive call is using the same parameters as the first call on its second iteration (0,1,3)?
I've been looking all over for this, and lots of people just seem to skim over this step. I'm new, so would really appreciate any advice anyone has to give. I've tried experimenting with recursive functions, like finding the nth term in the Fibonacci series, and the sum of triangular numbers, which I can do now, but it doesn't seem to be helping in solving this problem.
Big thanks
// sum is assumed to be an int field of the class, used to count the calls;
// merge(low, middle, high) is defined elsewhere in the class.
public void mergesort(int low, int high) {
    sum++;
    System.out.println("Round " + sum + ". Parameters being passed in = " + low + " " + high);
    if (low < high) {
        int middle = low + (high - low) / 2;
        System.out.println("Checking left ---- if " + low + " is < " + high);
        System.out.println("Current value of high = " + high);
        System.out.println("Current value of mid = " + middle);
        System.out.println("low, mid and high = " + low + " " + middle + " " + high);
        System.out.println();
        mergesort(low, middle);                 // sort the left half
        System.out.println("low, mid and high = " + low + " " + middle + " " + high);
        System.out.println("Checking right ---- if " + (middle + 1) + " is < " + high);
        mergesort(middle + 1, high);            // sort the right half
        System.out.println("low, mid and high = " + low + " " + middle + " " + high);
        merge(low, middle, high);               // merge the two sorted halves
    }
}
|
|
|
|
|
It helps to view recursion in terms of a stack of work items. For example, you start with a work item:
mergesort(0, 3)
This is popped off the stack and executed, giving:
mergesort(0, 1)
mergesort(2, 3)
The top entry is popped off the stack, and executed, giving
mergesort(0, 0)
mergesort(1, 1)
mergesort(2, 3)
No work needs to be performed for the first item (0, 0)
No work needs to be performed for the second item (1, 1)
==> merge(0, 0, 1)
We now have mergesort(2, 3) left on the stack, which is precisely what you see.
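If it helps to make that concrete, here is a small stand-alone sketch (separate from your code) that drives the same splitting with an explicit stack instead of recursion; the merge() calls are left out so you can watch the work items appear in exactly the order listed above:

// Stand-alone illustration: simulate the mergesort(low, high) calls with an
// explicit stack of [low, high] work items, mirroring the trace above.
import java.util.ArrayDeque;
import java.util.Deque;

public class MergesortStackDemo {
    public static void main(String[] args) {
        Deque<int[]> stack = new ArrayDeque<>();
        stack.push(new int[] { 0, 3 });                     // initial work item: mergesort(0, 3)
        while (!stack.isEmpty()) {
            int[] item = stack.pop();
            int low = item[0], high = item[1];
            System.out.println("mergesort(" + low + ", " + high + ")");
            if (low < high) {
                int middle = low + (high - low) / 2;
                stack.push(new int[] { middle + 1, high }); // right half (handled second)
                stack.push(new int[] { low, middle });      // left half (popped first)
            }
        }
    }
}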
Freedom is the freedom to say that two plus two make four. If that is granted, all else follows.
-- 6079 Smith W.
|
|
|
|
|
Hi Daniel,
I don't understand your third call of mergesort. The first time it calls (0,3), which are the first and last indexes of the array.
The second time it calls (0,1), which is the first half, or first two indexes, of the list.
But then why is it going to (2,3)?
Surely, as 0 < 1, the function would give (0,0), at which point it would exit the left half as the base condition has been met?
I've read that it then retains the value of (0,1), which is then passed into the second instance of mergesort (mid + 1, high) to give (1,1), which means the left and right sides are both ready to be merged?
I'll try to think of it more in terms of a stack, thanks for the pointer on that. I know how to program stacks, so I might try something this afternoon to see if I can get a full picture of what's going on underneath the recursive process.
Thanks for your help 
|
|
|
|
|
If you move your print statements out of the if (low < high) { … } block, you will see the extra calls to mergesort() that I added. In the cases of mergesort(0, 0) and mergesort(1, 1) you don't enter the if() block, so you don't see them at present.
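For example, something along these lines at the top of the method (a sketch, keeping your sum counter) prints every call, including the ones that fail the low < high test:

public void mergesort(int low, int high) {
    sum++;
    // printed before the if, so calls like mergesort(0, 0) and mergesort(1, 1) show up too
    System.out.println("Round " + sum + ". Called with low = " + low + ", high = " + high);
    if (low < high) {
        // ... the rest of the method stays exactly as it is ...
    }
}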
Freedom is the freedom to say that two plus two make four. If that is granted, all else follows.
-- 6079 Smith W.
|
|
|
|
|
Thank you, I will try this today
|
|
|
|
|
How do I convert ANY number to negative representation? I'm trying this in Excel.
To convert a positive number X, I just multiply by -1 and it becomes X * -1 = -X.
But what if the number is already negative? The same formula should still work seamlessly.
For example, if it's -X and I multiply by -1, it gets converted to +X, as in -X * -1 = X,
which I don't want.
On a list containing both negative and positive values, I want the formula to change only the positive numbers, so that everything ends up negative.
modified 19-Feb-20 9:34am.
|
|
|
|
|
=IF(X>0,-X,X) ' where X is the cell reference
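Another formula that gives the same result is =-ABS(X), if you prefer to avoid the IF.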
|
|
|
|
|
Richard MacCutchan wrote: =IF(X>0,-X,X)
Wow, that's sweet. 
|
|
|
|
|
Excel functions generally are.
|
|
|
|
|