Both have the same aspect ratio. In theory, I could calculate the square edges and build a grid of points, and afterwards interpolate each pattern pixel to its approximate destination on the mask, based on the edges of its parent square.
But maybe there is a more elegant way to do it, with OpenCV or some other computer vision algorithm?
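For concreteness, here is a minimal C# sketch of that grid idea (the names and structure are my own, purely illustrative): the pattern pixel's fractional position (u, v) inside its source square is mapped bilinearly into the corresponding destination quad on the mask. OpenCV can do the heavy lifting too: building a dense map of such coordinates and feeding it to remap, or applying a perspective warp per cell, amounts to the same thing.

// Bilinear map of a fractional position (u, v) in [0,1]^2 into a destination quad
// given by its corners: tl = top-left, tr = top-right, bl = bottom-left, br = bottom-right.
static (double X, double Y) MapIntoQuad(double u, double v,
    (double X, double Y) tl, (double X, double Y) tr,
    (double X, double Y) bl, (double X, double Y) br)
{
    // interpolate along the top and bottom edges, then between the two edge points
    double topX = tl.X + u * (tr.X - tl.X), topY = tl.Y + u * (tr.Y - tl.Y);
    double botX = bl.X + u * (br.X - bl.X), botY = bl.Y + u * (br.Y - bl.Y);
    return (topX + v * (botX - topX), topY + v * (botY - topY));
}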
I have n points in the plane and I need to find a pair of concentric circles such that all the points lie between the circles and the area between the circles is minimal.
I have an O(n^2) algorithm, but it seems possible to improve on it.
How did you even do it? The best I could do so far is this QCQP:
minimize    2 d r + d^2
subject to  d >= 0
            r >= 0
            for each point (x, y):
                (x - cx)^2 + (y - cy)^2 >= r
                (x - cx)^2 + (y - cy)^2 <= r + d
The point constraints work out to something like cx^2 + cy^2 - 2x·cx - 2y·cy - r + x^2 + y^2 >= 0 (all the squared terms on the left, the linear variables in the middle, and finally some constants). The squared terms are all non-negative, so the matrices that define them are positive semi-definite, and the problem can be solved with SDP.
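As a cheap sanity check on whatever solver you use (a brute-force sketch of my own, not part of the formulation above): once the center is fixed, the optimal annulus is forced, since the inner radius is the minimum distance to a point and the outer radius the maximum, giving area π(R² − r²).

// Smallest annulus area covering all points, for a fixed candidate center (cx, cy).
static double AnnulusArea((double X, double Y)[] points, double cx, double cy)
{
    double min2 = double.MaxValue, max2 = 0.0; // min/max squared distances
    foreach (var p in points)
    {
        double d2 = (p.X - cx) * (p.X - cx) + (p.Y - cy) * (p.Y - cy);
        if (d2 < min2) min2 = d2;
        if (d2 > max2) max2 = d2;
    }
    return Math.PI * (max2 - min2); // area = π(R² − r²)
}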
Hey, I'm having trouble finishing this algorithm:
A robot's engine warms up during its trip and is cooled down by a fan.
The robot is required to ensure that the engine does not overheat.
The engine's temperature-versus-time behaviour is given as follows:
The temperature of the environment is Te = 15.
It is known that the rate of cooling is defined by dT/dt = -k(T - Te) + Tm.
It is also known that the speed of the robot is 25 m/s, and that the starting temperature of the engine is 35.
We need to evaluate the differential equation for the route found in the last question, at each step Δt = 0.1.
The given answers, which I don't know how to get to, cover both this case and the case when the engine is getting warmer (they are not reproduced here).
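As a sketch of the numerical scheme such answers usually come from, here is a minimal forward-Euler loop in C#, using the given Te = 15, T(0) = 35 and Δt = 0.1; the values of k and Tm below are placeholders, since they are not stated above:

// Forward Euler for dT/dt = -k(T - Te) + Tm, evaluated every Δt = 0.1
double Te = 15.0;          // environment temperature (given)
double T  = 35.0;          // starting engine temperature (given)
double dt = 0.1;           // time step (given)
double k  = 0.5, Tm = 2.0; // placeholder values: NOT given in the problem
for (double t = 0.0; t < 10.0; t += dt)
{
    T += dt * (-k * (T - Te) + Tm); // T(t + Δt) ≈ T(t) + Δt · dT/dt
    Console.WriteLine($"t = {t + dt:F1}  T = {T:F3}");
}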
An algorithm presents a method to do something which ideally works for all manner of inputs. As you know, there are lots of different sorting algorithms, but bearing in mind they are all meant to do the same thing, why so many?
It's to do with the starting condition. If a list is already sorted when you give it to a sorting algorithm, it doesn't have to do any sorting; it merely has to determine that it is already sorted and that nothing needs to be done. You can do this by checking that each item is greater than (or equal to) the previous one. This is the best-case scenario, but each of the N items in the list still needs to be checked, so you end up with O(n): this means linear performance. A list twice as long takes twice as long.
The worst possible case is when the list is sorted the wrong way around. In this case every item is out of place, and each item has to bubble up as far as possible, so you end up with something like n * (n/2) operations, which becomes O(n^2). Each operation takes time, so counting operations and measuring time amount to the same thing here. Double the size and it takes four times as long.
For space complexity, O(1) means constant: no new space is required, as the sort happens in place within the data structure.
In your case with 25 elements, the best case means checking that those 25 elements are in order. The worst case means bubbling each element the furthest possible distance. A randomized list will be somewhere between the two extremes.
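To make that concrete, here is a standard bubble sort with the early-exit check described above (my own illustration, not tied to any particular answer here): on an already-sorted array it does one O(n) pass and stops; on a reverse-sorted array every neighbouring pair must be swapped, which is the O(n^2) worst case.

static void BubbleSort(int[] a)
{
    for (int pass = 0; pass < a.Length - 1; pass++)
    {
        bool swapped = false;
        for (int i = 0; i < a.Length - 1 - pass; i++)
        {
            if (a[i] > a[i + 1])
            {
                (a[i], a[i + 1]) = (a[i + 1], a[i]); // swap out-of-order neighbours
                swapped = true;
            }
        }
        if (!swapped) break; // no swaps: already sorted, best case exits after one O(n) pass
    }
}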
f(n) ∈ O(g(n)) iff there exist n0 and c such that for all n > n0, f(n) < c·g(n)
This quite technical definition means the whole big-O deal is really a statement about the membership of a function in some set of functions that, informally, all "grow about the same". But they only have to start growing about the same after some point n0 (which you don't know), and an arbitrary constant c is hidden.
For time complexity, the function we make statements about is the one that counts how many "elementary operations" some algorithm takes, as a function of the size of its input. In order to do that, one must agree on what kinds of operations count as elementary. There are several models for that; the usual one has the at-first-sight odd property that mathematical operations on integers of size O(log n) run in constant time. This is necessary to prevent the following problem: suppose you could only do operations on constant-length integers in constant time, and you want to manipulate an index into an array. That array has length n, so the index must have size (you guessed it) log n. It would therefore take non-constant space to even just hold an index, and non-constant time to do anything with one, which usually no one wants.
It should be clear from the n0 that big-O notation explicitly says nothing about the behaviour of the function at any particular constant size. It also says nothing about what happens when you double that size, a mistake that is commonly made, leading to questions such as "it took x milliseconds for an input of size 10, how long will it take for an input of size 20?" You cannot know that based on some big O.
Let us suppose a simple array contains 25 elements
So it should be clear by now that you can't say much about that situation. Bubble sort (and literally anything else) is going to take some constant number of elementary operations (aka "time") on that input, because there is no variable left to vary. You cannot compute the number of operations based on the big O, but based on detailed knowledge of the algorithm, you can: for example, bubble sort on a reversed 25-element array makes 25·24/2 = 300 comparisons.
Let us suppose a simple array contains 25 elements. What does the above information mean, and how should I visualize it?
Nothing! It doesn't work that way.
Say that you just measured the runtime it takes for 25 elements, in the best case and in the worst case.
Big O notation allows you to estimate what it will take for 250 elements or 2500 elements.
A best case of O(n) means that 10 times the data will take 10 times the runtime, and 100 times the data will take 100 times the runtime.
A worst case of O(n^2) means that 10 times the data will take 100 times the runtime, and 100 times the data will take 10000 times the runtime.
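For a concrete (made-up) number: if the worst case took 2 ms for your 25 elements, then for 250 elements you would expect on the order of (250/25)^2 × 2 ms = 200 ms, while the best-case time would grow only tenfold.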
I am currently working on a procedurally generated game, and vaguely remember having read a book/paper/article which said that there exists a published algorithm that can generate high-quality board games with a random set of rules, games as good as human-designed board games. The algorithm was NOT limited to a single game such as Sudoku or crossword puzzles; rather, it would generate a set of rules for the players to play by, so it could potentially generate tic-tac-toe, checkers or four-in-a-row, and variations of all three games.
I think the article was published between the 1980s and 2006-ish, and that it appeared in a journal.
Sadly I cannot remember where I read the article, only that it must have been within the last 8 months. If you know about this algorithm or something similar to it, please post.
Hi, I have this algorithm. Could anyone help me figure out the answer to it, and how to get there?
Create a recurrence function to represent the following algorithm's time complexity and use the master method to analyse the following algorithm:
/* The function f(x) is unimodal over the range [min, max] and can be evaluated in Θ(1) time.
 * ε > 0 is the precision of the algorithm; typically ε is very small.
 * max > min
 * n = (max - min)/ε, n > 0, where n is the problem size. */
Algorithm(min, max):
    if ((max - min) < ε)
        return (min + max) / 2              // return the answer
    leftThird = (2 * min + max) / 3         // the point which is 1/3 of the way from min to max
    rightThird = (min + 2 * max) / 3        // the point which is 2/3 of the way from min to max
    if (f(leftThird) < f(rightThird))
        return Algorithm(leftThird, max)    // look for the answer in the interval between leftThird and max
    return Algorithm(min, rightThird)       // look for the answer in the interval between min and rightThird
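For what it's worth, here is how I would set it up (a sketch, not a verified solution). Each call does a constant amount of work and then recurses once on an interval of two-thirds the size, so the problem size n shrinks by a factor of 3/2 per call:

    T(n) = T(2n/3) + Θ(1)

With a = 1, b = 3/2 and f(n) = Θ(1) = Θ(n^0) = Θ(n^(log_{3/2} 1)), case 2 of the master theorem applies, giving T(n) = Θ(log n).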
If you are familiar with idle or incremental games like Adventure Capitalist or Clicker Heroes, you know that after a while you are dealing with very large numbers. If you are not: basically, you earn points in these games, which you invest so that you can earn more points even quicker, rinse and repeat.
I wrote a small game in this style myself and never left the range a double can handle, but it did get me thinking about how to deal with extraordinarily large numbers.
Just to play around, I figured a quick and easy way was to keep a list of ints where each index represents a higher power, so you can easily go higher if needed. Index zero keeps numbers between 0 and 999, and each index from 1 upward holds a single digit, starting at 10^3; so index 5 would be 10^7, etc.
Other than for the 0-999 part, if you want to increase the number, I have a function where you specify which digit 1-9 you want to add and at which power. If you add, say, 9·10^5 to 3·10^5, it sorts this out by becoming 2·10^5 plus a carry of 1·10^6.
So far I haven't accounted for subtraction in my little project, but that should be fairly easy to add in a similar manner.
I don't think this should be too computationally heavy, unless it is used to run additions between several large lists.
Should I want to use this in a game, I think it could also easily be adapted to use some approximation to reduce unnecessary work. For example, if one entity in the game adds 10^15 points per second and another adds 10^3, and you only show 5 decimals, then instead of doing all the background computation of adding those smaller numbers every second, you could just estimate how many seconds it will take before they add enough to become visible, and go from there, if I'm making myself clear.
using System;
using System.Collections.Generic;

public class NumberControl
{
    private readonly int mPower;      // number of power slots
    public List<int> NumberContainer; // [0] holds 0-999; [k >= 1] holds the single digit for 10^(k+2)
    public NumberControl(int maxPower) { mPower = maxPower; NumberContainer = new List<int>(new int[maxPower]); }

    public void AddNumber(int value, int power) // 10^3 = power 1, 10^4 = power 2, 10^5 = power 3 etc.
    {
        if (power == 0)
        {
            NumberContainer[0] += value;
            while (NumberContainer[0] >= 1000) { NumberContainer[0] -= 1000; AddNumber(1, 1); } // carry into 10^3
        }
        else if (power < mPower)
        {
            NumberContainer[power] += value;
            while (NumberContainer[power] >= 10) { NumberContainer[power] -= 10; AddNumber(1, power + 1); } // carry
        }
        // else: power too large
    }

    public static NumberControl operator +(NumberControl c1, NumberControl c2)
    {
        // set the new max power to the largest of the two
        int maxPower = Math.Max(c1.mPower, c2.mPower);
        var nc = new NumberControl(maxPower);
        for (int i = 0; i < c1.NumberContainer.Count; i++) nc.AddNumber(c1.NumberContainer[i], i);
        for (int i = 0; i < c2.NumberContainer.Count; i++) nc.AddNumber(c2.NumberContainer[i], i);
        return nc;
    }
}
Here is how I add numbers. Thinking about this really got me wondering how others would choose to solve it.
NumberContainer is just a List<int>.
I'm planning to profile how well it handles a few different scenarios: for example, if you have a list that goes up to the power of 100 and, say, 80 "generators" that each add a number at a different power level every second, how will it hold up compared to a version that approximates away generators smaller than a certain range below the biggest number?
Reading the numbers is done by specifying how many decimals I want shown and then just walking backwards through the list, appending to a string; so unless I want full precision, it also works fairly well.
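Roughly like this (a sketch with hypothetical names, not my exact code):

using System.Collections.Generic;
using System.Text;

static class BigNumberFormat
{
    // Finds the highest non-zero slot, prints its digit, then walks backwards
    // appending lower digits until enough decimals are shown.
    public static string Format(List<int> slots, int decimals)
    {
        int top = slots.Count - 1;
        while (top > 0 && slots[top] == 0) top--;    // skip leading zero slots
        if (top == 0) return slots[0].ToString();    // small number: print directly
        var sb = new StringBuilder();
        sb.Append(slots[top]).Append('.');
        for (int i = top - 1; i > 0 && sb.Length < decimals + 2; i--)
            sb.Append(slots[i]);                     // one digit per slot
        return sb.ToString() + "e" + (top + 2);      // slot k represents 10^(k+2)
    }
}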
Maybe, but then it would need a bit more polish and I'm not sure it would fit; perhaps a small blog post carrying some test results and a discussion around it.
When I started thinking about how to solve this, I had a nagging suspicion that something was already readily available, but I couldn't figure out what search terms to use to find it, so I turned it into code instead. That blog post was interesting; I admire how some people can go so in-depth into a subject.