
I think the best way of solving this is divide and conquer / dynamic programming. Just find the shortest path between the first node and the intermediate nodes using BFS, stopping when you reach the first of those nodes. Then, from that found node, use BFS to look for the next intermediate node, and so on until you reach the last intermediate node. Finally, use BFS to find the path between the last visited intermediate node and the end node.
Before all of that, if your graph might not be connected, you can check that the first node, the last node, and all of the intermediate nodes lie in the same connected component.
Another way of solving the problem is exhaustive search ("complete depletion"): generate all possible paths and look for the minimal path that contains the intermediate nodes.
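A sketch of the greedy BFS idea above (illustrative Python, not from the thread; the graph layout and names are mine, and note that hopping to the nearest intermediate is a heuristic, so it may not give the overall shortest route):

```python
from collections import deque

def bfs_shortest_path(graph, start, targets):
    """BFS from start; return the shortest path to whichever
    node in `targets` is reached first (fewest edges)."""
    prev = {start: None}
    queue = deque([start])
    while queue:
        node = queue.popleft()
        if node in targets:
            path = []
            while node is not None:       # walk the parent links back
                path.append(node)
                node = prev[node]
            return path[::-1]
        for nxt in graph.get(node, []):
            if nxt not in prev:
                prev[nxt] = node
                queue.append(nxt)
    return None                           # no target reachable

def greedy_route(graph, start, end, intermediates):
    """Repeatedly hop to the nearest unvisited intermediate node,
    then finish at `end`. Greedy, so not guaranteed optimal."""
    remaining, current, route = set(intermediates), start, [start]
    while remaining:
        leg = bfs_shortest_path(graph, current, remaining)
        if leg is None:
            return None
        route += leg[1:]
        current = leg[-1]
        remaining.discard(current)
    leg = bfs_shortest_path(graph, current, {end})
    return None if leg is None else route + leg[1:]

# Toy undirected graph: edges A-P, A-Q, Q-R, R-B
graph = {"A": ["P", "Q"], "P": ["A"], "Q": ["A", "R"],
         "R": ["Q", "B"], "B": ["R"]}
route = greedy_route(graph, "A", "B", ["P", "Q", "R"])
```

Note that on this toy graph the route revisits A, since P is a dead end reachable only through it.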
I hope this will help you
Microsoft ... the only place where VARIANT_TRUE != true





Thank you for your reply. But the problem is that the order of the intermediate nodes is uncertain. We don't know which is the first and which is the next.





That's why, when you use BFS and start visiting the child nodes of the current node, checking whether each is one of the intermediate nodes, you will find the closest intermediate node. And from there you will determine their order.
Microsoft ... the only place where VARIANT_TRUE != true





But in some cases the closest intermediate node cannot be the first node of the path. If we choose the closest intermediate node as the first one, some other intermediate node may be unreachable along a simple path. In other words, the path we want to find should not contain duplicate nodes.





Actually, your path can contain duplicate nodes. Consider the following possibility:
A is your first node, B the last, and P, Q, R are the intermediate nodes, and you have the following edges:
(A,P) (A,Q) (Q,R) (R,B). You cannot escape from duplicating A. It's impossible to make a path without it.
If the algorithm creates a path with duplication of some of the nodes, that means you have a similar situation. Normally with BFS you still have the visited nodes, so you can filter out the visited ones, or better, put them in a different queue to be used only in case you don't find another path between the current two nodes.
Note: if you filter them out, in some cases you won't find a path.
Microsoft ... the only place where VARIANT_TRUE != true





The situation you mentioned may indeed exist. In that case, we can confirm that no path meets the condition. If we find all possible paths between the two nodes using DFS or BFS and then choose the path that meets the condition, it will be too slow to complete for a huge number of nodes.





You can use Dijkstra's algorithm for finding the shortest distance between any two nodes. There is also Kruskal's algorithm, though that builds a minimum spanning tree rather than shortest paths. If you need any help, I am ready to help.
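A minimal sketch of Dijkstra's algorithm over a weighted adjacency list (illustrative Python; the graph and names are mine):

```python
import heapq

def dijkstra(graph, start, end):
    """Shortest path by total weight; graph maps each node to a
    list of (neighbor, weight) pairs with non-negative weights."""
    dist = {start: 0}
    prev = {}
    heap = [(0, start)]
    while heap:
        d, node = heapq.heappop(heap)
        if node == end:                    # done: rebuild the path
            path = [end]
            while path[-1] != start:
                path.append(prev[path[-1]])
            return d, path[::-1]
        if d > dist.get(node, float("inf")):
            continue                       # stale heap entry
        for nxt, w in graph.get(node, []):
            nd = d + w
            if nd < dist.get(nxt, float("inf")):
                dist[nxt], prev[nxt] = nd, node
                heapq.heappush(heap, (nd, nxt))
    return float("inf"), None              # end not reachable

g = {"A": [("B", 1), ("C", 4)], "B": [("C", 1), ("D", 5)],
     "C": [("D", 1)], "D": []}
cost, path = dijkstra(g, "A", "D")
```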





Image compression using half an coding or any other algorithm?





snehal122 wrote: "half an coding". What's that?





cod.
Use the best guess





"an coding" => "encoding"
"half" => hm... I do not know an encoding with such (or a similar) name






Can you tell me the mechanism of Huffman encoding? Thanks.
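In brief, Huffman encoding counts symbol frequencies and repeatedly merges the two least-frequent subtrees into one, so frequent symbols end up with shorter bit codes. A minimal Python sketch of the idea (my own illustration, not from the thread):

```python
import heapq
from collections import Counter

def huffman_codes(text):
    """Build a Huffman code table by repeatedly merging the two
    least-frequent subtrees; frequent symbols get shorter codes."""
    freq = Counter(text)
    if len(freq) == 1:                     # degenerate: one symbol
        return {next(iter(freq)): "0"}
    # heap holds (total_frequency, tie_breaker, {symbol: code_so_far})
    heap = [(f, i, {s: ""}) for i, (s, f) in enumerate(freq.items())]
    heapq.heapify(heap)
    tie = len(heap)
    while len(heap) > 1:
        f1, _, c1 = heapq.heappop(heap)    # two least-frequent subtrees
        f2, _, c2 = heapq.heappop(heap)
        # prepend 0 to one subtree's codes and 1 to the other's
        merged = {s: "0" + code for s, code in c1.items()}
        merged.update({s: "1" + code for s, code in c2.items()})
        heapq.heappush(heap, (f1 + f2, tie, merged))
        tie += 1
    return heap[0][2]

codes = huffman_codes("abracadabra")
encoded = "".join(codes[ch] for ch in "abracadabra")
```

The resulting code is prefix-free, so the bit stream can be decoded unambiguously by walking it left to right.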





I am using the SplineInterpolator class in Java, and am getting some unexpected results. I think it is because of the restriction that cubic splines are required to have a continuous second derivative at each knot, but am not sure. Here is the observed peculiarity:
Let's say I have five data points, (x1, y1), (x2, y2), ... (x5, y5), and create a cubic spline interpolation between them. Then I compute another cubic spline interpolation over only the first four of the five knots.
My expectation was that the first two cubic segments (the segment between points 1 and 2, and the segment between points 2 and 3) would be identical between the two interpolations, but this is not the case. Instead, each segment has a different interpolating function.
Is there any way to compute a cubic spline interpolation that has the same interpolating function between all common knots (excluding x segments at each end, where x is some constant)? I know this is possible if I ignore the requirement that the second derivative is continuous, but then it is not technically a cubic spline. Also, if there is already a Java library that does what I am asking, please let me know! Thanks,
Sounds like somebody's got a case of the Mondays
Jeff
modified 23Jul13 17:57pm.





Just think again: that is the expected behavior when the "real" function is not a cubic function. And the interpolation is best between the two central points of the input data.
The underlying formula is a*x^3 + b*x^2 + c*x + d = y. Enter 4 x values with the corresponding y values, and calculate a, b, c, d.
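Concretely, fitting a single cubic through four points means solving a 4x4 linear system for a, b, c, d. A sketch using Gauss-Jordan elimination (illustrative Python; the sample points are mine):

```python
def fit_cubic(points):
    """Solve a*x^3 + b*x^2 + c*x + d = y for [a, b, c, d] given
    exactly four (x, y) points, by Gauss-Jordan elimination."""
    assert len(points) == 4
    # Augmented 4x5 matrix of the Vandermonde system.
    m = [[x**3, x**2, x, 1.0, float(y)] for x, y in points]
    for col in range(4):
        piv = max(range(col, 4), key=lambda r: abs(m[r][col]))
        m[col], m[piv] = m[piv], m[col]        # partial pivoting
        for r in range(4):
            if r != col:
                f = m[r][col] / m[col][col]
                m[r] = [v - f * p for v, p in zip(m[r], m[col])]
    return [m[i][4] / m[i][i] for i in range(4)]

# Points sampled from y = 2x^3 - x + 5
coeffs = fit_cubic([(0, 5), (1, 6), (2, 19), (3, 56)])
```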





I understand that, but given the exact same data points, shouldn't the cubic interpolation of those points always be the same? I think the problem is the additional restriction that the second derivative be continuous between each cubic interpolation. Without such a restriction, I could compute a cubic interpolation between each pair using a slope determined from the neighboring data points (e.g., each piece of the interpolating method would be defined ONLY by the two points involved, and the two points surrounding those two points). However, such an approach would not have a continuous second derivative. In order to accomplish such a goal, I think the algorithm propagates the second derivatives through to match at each endpoint, instead of the first derivatives, which results in slightly different solutions when provided a consecutive subset of the original data points.
Hopefully this clarifies the problem (I am not fitting a single polynomial to two different data sets; I am applying a 3rd-degree polynomial between each pair of consecutive data points).
Sounds like somebody's got a case of the Mondays
Jeff





No. You are confusing the task of determining the coefficients of a one-dimensional cubic polynomial from 4 data points with the task of interpolating an arbitrary number of 3D points with a piecewise polynomial cubic spline curve.
The latter creates one new spline segment for each additional point beyond the first. So if you have 4 points, you get three segments, not one!





The only way to solve your requirement is a polygon: by your own requirement, a spline over N points should be equal to the joining of the N-1 splines you get when creating splines for two consecutive points in the sequence. Since with two points the resulting spline curve is always a line segment, the full spline will be a polygon.
The moment you introduce continuity conditions for the first or higher derivative, the spline needs to look ahead to a point or points beyond its scope, so it can determine what its derivatives must evaluate to at its end points.





For anyone else looking at this, I figured out the answer. A cubic spline over N+1 points is solving for the 4 coefficients of each of the N cubic spline segments. This implies that you need exactly 4N equations to compute a unique spline over the data points (since you have 4N unknowns).
A cubic spline uses the following set of 4N-2 equations to compute the 4N cubic coefficients (the other two equations are boundary conditions, usually f''_{0}(x_{0}) = f''_{N}(x_{N}) = 0):
1) f_{n}(x_{n}) = y_{n} (N equations)
2) f_{n}(x_{n+1}) = y_{n+1} (N equations)
3) f'_{n}(x_{n+1}) = f'_{n+1}(x_{n+1}) (N-1 equations)
4) f''_{n}(x_{n+1}) = f''_{n+1}(x_{n+1}) (N-1 equations)
As you can see by the above equations, the spline is not matched to meet a specified slope at each point, but instead only guarantees that two consecutive segments will have a continuous derivative (and second derivative). Because of this, I had to use cubic interpolation between each set of points using the equation given as Equation 1 on the Wikipedia Cubic Spline[^] page. This results in a piecewise interpolation that has a continuous first derivative, but does not have a continuous second derivative. In essence, this changes equalities 3 and 4 from above to be:
3) f'_{n}(x_{n}) = k_{n} (N equations)
4) f'_{n}(x_{n+1}) = k_{n+1} (N equations)
Not a spline, but it consistently fits any consecutive subset of a dataset with the exact same cubic polynomial interpolations. I use the data points adjacent to those defining a segment's endpoints to estimate the slope at the segment endpoints.
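A sketch of that local scheme (my own illustration in Python; I estimate slopes with central differences at interior knots and one-sided differences at the ends). Because each segment depends only on nearby points, refitting a consecutive subset of the data reproduces the same segments:

```python
def hermite_segments(xs, ys):
    """Piecewise cubic Hermite interpolation. Each segment depends
    only on its two endpoints and slopes estimated from adjacent
    points, so common segments of two datasets come out identical."""
    def slope(i):
        # central difference inside, one-sided at the ends
        lo, hi = max(i - 1, 0), min(i + 1, len(xs) - 1)
        return (ys[hi] - ys[lo]) / (xs[hi] - xs[lo])
    return [(xs[i], xs[i + 1], ys[i], ys[i + 1], slope(i), slope(i + 1))
            for i in range(len(xs) - 1)]

def evaluate(seg, x):
    """Evaluate one segment with the cubic Hermite basis functions."""
    x0, x1, y0, y1, m0, m1 = seg
    h = x1 - x0
    t = (x - x0) / h
    h00 = 2*t**3 - 3*t**2 + 1
    h10 = t**3 - 2*t**2 + t
    h01 = -2*t**3 + 3*t**2
    h11 = t**3 - t**2
    return h00*y0 + h10*h*m0 + h01*y1 + h11*h*m1

full = hermite_segments([0, 1, 2, 3, 4], [0, 1, 0, 1, 0])
sub  = hermite_segments([0, 1, 2, 3],    [0, 1, 0, 1])
```

Here `full[1] == sub[1]`: the segment between the second and third knots is identical in both fits, which is exactly the consistency a natural cubic spline does not provide.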
Sounds like somebody's got a case of the Mondays
Jeff





I am looking for a statistical method or algorithm to analyse change over time and to determine whether a value is outside of a range and is a false positive or an outlier.
The context is that I have webcams which monitor movement.
The movement returned is as a percentage of change between frames.
Occasionally a cloud covers the sun or a shadow falls across the webcam image which will cause a spike in percentage as change reported.
A normal reading may fluctuate within a range of about 10% over 0.5 seconds.
A sudden change in lighting would shift the percentage abruptly, by say 60%, and I would consider this a false positive.
So what I am looking for is some way of analysing change over time and catching the false positives rather than reporting them as movement.
Any pointers or links much appreciated.
“That which can be asserted without evidence, can be dismissed without evidence.”
― Christopher Hitchens





I'm not sure if this is what you are after, but there was an article I read and tried out successfully here on CP that would track movement of objects captured with a webcam, using something called a Haar Cascade. I'm sorry that I can't remember the name of the article, but you should try a search of the CP articles with suitable search terms.
I'm on a mobile device right now and can only say that using CP's search facilities from it is, hmm, let's put it kindly, suboptimal.
Cheers!
"I had the right to remain silent, but I didn't have the ability!"
Ron White, Comedian





Thanks - I will look into that.
I also decided to gather some of the data showing the movement level detected over time.
I did this to see if I could determine any pattern that was present when a spike in light took place.
What I noticed, and I know this is obvious, was that a light spike showed as a sudden increase over a short period of time.
I learnt the often-learnt lesson: look at the data first rather than base the plan of action on a hypothesis.
So my current plan is to look at the differential over time and spot any spikes in this differential.
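That differential test can be as simple as thresholding the frame-to-frame rate of change (Python sketch; the sample interval, threshold, and readings below are made-up values to tune against real data):

```python
def flag_spikes(samples, dt=0.5, max_rate=20.0):
    """Flag readings whose rate of change (percent per second)
    exceeds max_rate: a near-instant jump suggests a lighting
    change rather than genuine movement. Thresholds are illustrative."""
    flags = [False]                        # first sample has no delta
    for prev, cur in zip(samples, samples[1:]):
        flags.append(abs(cur - prev) / dt > max_rate)
    return flags

# Simulated percent-change readings; the jumps to and from 60
# mimic a cloud suddenly uncovering the sun.
readings = [8, 10, 9, 11, 60, 58, 12, 10]
flags = flag_spikes(readings)
```

A running median or exponential moving average would make the baseline more robust, but the plain differential already catches the abrupt lighting spikes.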
“That which can be asserted without evidence, can be dismissed without evidence.”
― Christopher Hitchens





hello
I want to find a fast algorithm for computing 1/d, where d is a double (although it can be converted to an integer). Of the many algorithms (SRT, Goldschmidt, Newton-Raphson, ...), which is the best for this case? I'm writing my program in C.
thanks in advance.





It depends: on the quality of the implementation, the number of bits that you care about, the platform, and even the compiler. There's no simple answer. You'll just have to try them, unless you tell us more about the intended target etc.
Some more options might be:
- Avoiding the division altogether. Calculate with fractions.
- Using hardware-supplied division, if it exists. On some processors it's not half bad.
- Using a hardware-supplied approximate reciprocal, optionally with a refinement step. x86 has had RCPSS[^] for a while now; it's a lot faster than actually dividing, even if you do a refinement step. 3DNow! actually had better support for fast reciprocals, but it's AMD-specific and obsolete.
- Using crazy IEEE-format-specific tricks such as this[^]
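The refinement step mentioned above is usually a Newton-Raphson iteration for the reciprocal, x <- x * (2 - d*x), which needs only multiplies and subtractions. A Python sketch of the idea (the range scaling and the 48/17 - 32/17*d initial guess are a common textbook choice, not something from this thread; a C or FPGA version would manipulate the exponent and mantissa bits directly instead of looping):

```python
def reciprocal(d, iterations=5):
    """Approximate 1/d using Newton-Raphson: x <- x * (2 - d*x).
    Only multiplies and subtractions, so it suits hardware with
    no divider. Converges quadratically from a good first guess."""
    assert d > 0
    # Scale d into [0.5, 1); undo the scaling at the end.
    e = 0
    while d >= 1.0:
        d *= 0.5
        e += 1
    while d < 0.5:
        d *= 2.0
        e -= 1
    x = 48.0 / 17.0 - (32.0 / 17.0) * d   # linear initial guess on [0.5, 1)
    for _ in range(iterations):
        x = x * (2.0 - d * x)             # error roughly squares each step
    return x * 0.5 ** e
```

Because the error squares on each iteration, a handful of steps from this initial guess reaches double precision.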





I'm writing a fairly large program, and within it there are some divisions. The program must be implemented on an FPGA, and the FPGA has no division unit, so I must write a fast algorithm for all the divisions. I don't have access to the hardware, and I must implement it at the software level.
I have read some algorithms from here, but I don't know which one to use.



