As a programmer, creating algorithms is your job, so start as soon as possible. This skill comes with experience: the more you practice, the easier it gets.
Member 15020867 wrote:
Problem: I have no idea how to begin
Just like for any new problem, take a sample input and solve by hand with a sheet of paper and a pencil.
Start again with other samples until you devise a method to solve the problem.
Then try to improve the algorithm.
The job is to calculate the degree of discontent of passengers on a bus.
Passengers all get on the bus at the same point and say at which point they want to get off (that point is the mileage from where they board to where they want to disembark), but the bus can only make k stops; that is, the number of stops is limited. The discontent is calculated as (x - y) * 2, where x is the point each passenger chose and y is where the bus actually stopped. K will be defined after calculating the discontent of each passenger.
If we do not take into account the growth of passenger dissatisfaction due to an excessively large number of stops, then to a first approximation the formula would be ((x - y) / k) * 2. But this is just my opinion.
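The problem statement leaves some details open, but the scoring part can be pinned down with a small sketch. This assumes the discontent really is 2 * (x - y) and that each passenger gets off at the last stop at or before their requested point — both assumptions, since the post doesn't say; all names here are illustrative:

```python
def total_discontent(requests, stops):
    """Sum of 2 * (x - y) over all passengers, where y is the
    chosen stop at or before the passenger's requested point x."""
    total = 0
    for x in requests:
        # assumption: passenger leaves at the last stop not past x
        y = max(s for s in stops if s <= x)
        total += 2 * (x - y)
    return total

# Passengers requested mileages 3, 5 and 9; the bus stops at 0, 4 and 9.
print(total_discontent([3, 5, 9], [0, 4, 9]))  # 2*(3-0) + 2*(5-4) + 2*(9-9) = 8
```

With a scoring function like this in hand, choosing the k stops that minimize the total becomes a separate optimization problem (on a line it can be attacked with dynamic programming).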
I am building a small web app that allows stamp collectors to swap their stamps. The main problem I am trying to solve is the 1-on-1 (direct) swap: usually A needs something from B, but B does not always need something from A.
But the solution is to find a circular swap, for example:
A needs from B
B needs from C
C needs from A
result: everyone is happy.
I have a list of collectors, and for each one a list of stamps they have and a list of stamps they are looking for.
I am trying to find a way to create a circle by matching the "have" and "want" lists. The problem is that this may take a very long time with lots of collectors and lots of stamps.
What would be the best approach for this?
The straightforward algorithm, I think (which is obviously not optimized), is to:
1) look at what "B" wants.
2) find who has what B wants (stamp after stamp)
3) for each one, see if he (C) needs something from A
4) if yes, YAY
5) if not, see what C wants (stamp after stamp)
6) for each one, see if he (D) needs something from A
... and so on ...
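The steps above amount to a cycle search in a directed graph: make each collector a node, draw an edge from U to V whenever V has a stamp U wants, and a swap circle is a directed cycle. A minimal sketch of that idea (the names `have`, `want`, `build_graph` and `find_cycle` are all mine, not from any library):

```python
def build_graph(have, want):
    """Edge u -> v whenever v has at least one stamp that u wants.
    `have` and `want` map each collector to a set of stamp ids."""
    return {u: [v for v in have if v != u and want[u] & have[v]]
            for u in want}

def find_cycle(graph, start):
    """DFS from `start`; return the first swap circle through it, or None."""
    stack = [(start, [start])]
    while stack:
        node, path = stack.pop()
        for nxt in graph[node]:
            if nxt == start:
                return path + [start]          # closed the circle
            if nxt not in path:
                stack.append((nxt, path + [nxt]))
    return None

have = {'A': {1}, 'B': {2}, 'C': {3}}
want = {'A': {2}, 'B': {3}, 'C': {1}}
g = build_graph(have, want)
print(find_cycle(g, 'A'))  # ['A', 'B', 'C', 'A']
```

Framing it as a graph problem also points at the scalability question: the expensive part is the exhaustive path search, and standard cycle-detection algorithms (or limiting the circle length to 3 or 4 collectors) keep it tractable.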
This is my first question in this forum, so suggestions for improvement are welcome.
I have a data set of arrays which contain floats between -1 and 1, and a new array of the same format.
The arrays have a length of up to 300.
I have to find the array from the data set that has the smallest distance to the new array.
IMPORTANT: The values within each array must remain in their positions, while it is allowed to sort the arrays in the data set.
How can I achieve this while keeping an acceptable trade-off between accuracy and scalability?
Of course I could just go through every array, but I guess that wouldn't scale.
PS: Usually adjacent values have a similar value (so [-0.8, 0.3, -0.7, 0.9] would be very unlikely).
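For context, a vectorized brute-force scan is often faster than expected and makes a good baseline; something like this, assuming Euclidean distance (the question doesn't specify a metric, so that's an assumption):

```python
import numpy as np

def nearest(dataset, query):
    """Index of the dataset row closest to `query` in Euclidean
    distance; values stay in their positions, nothing is reordered."""
    dataset = np.asarray(dataset)
    query = np.asarray(query)
    dists = np.linalg.norm(dataset - query, axis=1)
    return int(np.argmin(dists))

data = [[0.1, 0.2, 0.3],
        [0.9, 0.8, 0.7],
        [0.1, 0.1, 0.1]]
print(nearest(data, [0.15, 0.18, 0.25]))  # 0
```

For better scalability, note that classic index structures like KD-trees degrade badly at dimensions around 300; the "adjacent values are similar" property suggests a cheaper route, e.g. averaging blocks of neighbouring values into a short summary vector and using that as a coarse prefilter before the exact comparison.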
By identical, I assume you mean isomorphic. See here[^], which speculates that the problem may be NP-intermediate (between P and NP-complete).
Let's call the two graphs G1 and G2. On the surface, the problem seems NP-complete because you have to compare G1 and G2 after renaming the n vertices in G2 using names from G1, so there are n! combinations to try. However, some combinations can be filtered out quickly. For example, a vertex in G2 has to have the same degree (number of edges) as the one it is being compared to in G1. But if this check passes, there's still more checking to do, as in the case of the graph with 6 vertices shown in your link.
This "same name" talk is confusing. The general problem is that the two graphs have different names for their vertices, so you have to try all possible ways of mapping one graph onto the other. So even if there's there's a vertex A in G1 and a vertex A in G2, that doesn't mean that you don't have to compare A in G1 with B in G2, and so on. You'd only need to compare A to A if you're deliberately being told to test a specific mapping, ignoring all others, or when you're solving the general problem and evaluating each possible mapping in succession.
The neighbours must be the same, not just the degrees. Your example, where all the vertices have the same degree, shows why the problem is difficult. If it's an undirected graph and all the degrees are n-1, then it's easy because both must be complete graphs. If not, then let's say we had a graph with 6 vertices, all of degree 2. How can you tell whether these are two disjoint graphs (two C3's) or a connected bipartite graph? You have to go beyond the vertices' degrees to do that.
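That 6-vertex example can be made concrete with a few lines of Python. Both graphs below have the degree sequence [2, 2, 2, 2, 2, 2], yet one is two disjoint triangles and the other is a single 6-cycle, so a connectivity check (here a simple BFS over components) tells them apart where degrees alone cannot — a sketch, with illustrative names:

```python
def degrees(adj):
    """Sorted degree sequence of an undirected graph (adjacency dict)."""
    return sorted(len(nbrs) for nbrs in adj.values())

def component_sizes(adj):
    """Sizes of the connected components, found by BFS."""
    seen, sizes = set(), []
    for start in adj:
        if start in seen:
            continue
        frontier, size = [start], 0
        seen.add(start)
        while frontier:
            v = frontier.pop()
            size += 1
            for w in adj[v]:
                if w not in seen:
                    seen.add(w)
                    frontier.append(w)
        sizes.append(size)
    return sorted(sizes)

# Two disjoint triangles (2 x C3) vs. one hexagon (C6)
two_triangles = {0: [1, 2], 1: [0, 2], 2: [0, 1],
                 3: [4, 5], 4: [3, 5], 5: [3, 4]}
hexagon = {i: [(i - 1) % 6, (i + 1) % 6] for i in range(6)}

print(degrees(two_triangles) == degrees(hexagon))  # True: degrees can't tell them apart
print(component_sizes(two_triangles), component_sizes(hexagon))  # [3, 3] vs [6]
```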
On the surface, it looks NP-complete because there are n! possible mappings, and n! grows at least as fast as an exponential function.
Your first paragraph is correct. So is your second paragraph, but of course all combinations have to be tried before concluding that the graphs are not isomorphic.
Your example shows why the problem is difficult. You have to try all combinations, though optimizations certainly exist. Trivially, the number of vertices in both graphs has to be the same. And if you sort their vertices' degrees, the two sequences have to be the same. But as the 6-vertex graph demonstrated, that isn't enough. There are undoubtedly more optimizations, which is why the article that I linked to speculated that the problem is NP-intermediate (easier than NP-complete). But you'd have to search the net to find the details of those algorithms.
It appears that I'm more or less answering a homework question, which isn't a good idea for you.
Yes, you have to satisfy the neighbour condition (which will automatically satisfy the degree condition). n is the number of vertices in G2, but G1 must also have n vertices, otherwise the graphs couldn't possibly be isomorphic. There are n! possibilities because you need to try all possible permutations when mapping G2's vertices to G1's. Maybe you get lucky and discover that they're isomorphic when you check the first permutation, or maybe you get unlucky and have to try them all.
When you are new to this site, any of your posts might be marked as possible spam, in which case it has to be reviewed by one of the admins. It probably happened because your post contained links, and some new members post links that are spam.
This has gone beyond my expertise level. It’s been about 40 years since I last did theoretical computer science. Is the isomorphism problem NP-complete or not? Well, I would trust your second link more, since it’s actually on a theoretical CS site, which carries more weight with me than GeeksForGeeks.
However, it appears that they're talking about two different things. One form of the isomorphism problem allows the graphs to have a different number of vertices. In that case, the goal is to determine whether the larger graph has a subgraph that is an isomorph of the smaller one. This definitely seems NP-complete because you first have to pick one of the C(v1,v2) possible subgraphs before you can even check if it is isomorphic, where v1 and v2 are the number of vertices in G1 and G2 respectively. That’s what the GeeksForGeeks post is talking about, whereas the other one seems to be talking about the simpler version of the problem, where G1 and G2 have the same number of vertices.
Are you building it on top of an instruction like x86's div, which takes a double-width dividend and a regular-width divisor, and results in a quotient and remainder? Or some other, less convenient, type of division (like MIPS) which divides two equal-sized numbers by each other?
Or is it a from-scratch kind of deal, on a machine without division?
No smiley ... In the days when a CPU filled a rack of boards, and the ALU (Arithmetic/Logic Unit) alone was at least one board, maybe more, there was a machine that did that, although for floating point rather than integer. The most significant bits of the mantissas (always kept normalized, with a hidden MSB) were used as indexes into a huge 2D table in ROM, giving the 11 most significant bits. From that, a Newton iteration was done, doubling the precision with each iteration. The entire iteration was done in hardware: the initial lookup took one clock cycle, each iteration took an extra clock cycle (two for single precision, four for double precision), and the final normalization of the result took yet another clock cycle.
This FP divide was so fast that the CPU didn't have any integer divide logic. It was faster to convert the integers to 64-bit FP, do the division, and convert back. The FP logic alone was a circuit board about A3 size (i.e. twice the size of a standard sheet of typewriter paper), packed with chips.
For all I know, maybe modern CPUs use the same technique today. In the late 1970s, it was so remarkable that the design was presented in internationally recognized professional magazines.
If I were to write a division function for arbitrary-length integers (or arbitrary-precision floats), I would seriously consider something in this direction. If the machine provides a division instruction, you can use that to obtain the first 'n' bits, rather than using a huge lookup table.
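The Newton iteration described above can be sketched in a few lines. This computes the reciprocal of a normalized mantissa with the update x ← x(2 − dx), which roughly doubles the number of correct bits per step; the crude linear initial guess stands in for the ROM table lookup, and all the names and constants here are illustrative:

```python
import math

def newton_reciprocal(d, iterations=5):
    """Approximate 1/d by Newton's method: x <- x * (2 - d * x).
    Each step roughly doubles the number of correct bits."""
    assert 0.5 <= d < 1.0          # assume a normalized mantissa
    x = 2.9 - 2 * d                # crude linear initial estimate (illustrative)
    for _ in range(iterations):
        x = x * (2 - d * x)
    return x

def divide(a, b):
    """Divide by multiplying with the reciprocal of b's mantissa."""
    m, e = math.frexp(b)           # b = m * 2**e with 0.5 <= m < 1
    return a * newton_reciprocal(m) * 2.0 ** -e

print(divide(10.0, 4.0))  # ~2.5
```

The same scheme carries over to arbitrary precision: start from a machine-width reciprocal and run extra Newton steps at successively wider precision until the full result width is reached.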