|
Well, I know that. When I said that, I meant "open" with regards to seeing the files in Visual Studio. I know very well that the programming language I choose opens the files for manipulation. Visual Studio is just the IDE, for editing and managing the files.
All I'm doing at this point is trying to click on a solution file, have all the files come into Visual Studio, fix all the errors when I click Start, and THEN I will begin to investigate the actual programs. I have another computer that already has a more complex set of C# files that opens in Visual Studio. I can see all the files. But it's too complicated (with hundreds of files) to use as an example.
A perfect example of what I'm trying to do at this point would be a solution in C# or VB, with more than one file, that creates an Excel file, adds a few columns with header names (Col1, Col2, Col3), puts in a few rows of data (any types of values), does a few calculations, maybe some selection and sorting, and saves the file.
Just understand I'm a hardware guy with software interests. I won't ever have all the technical jargon down pat. Like my wife reminds me, "you are pretty good at everything, but not real good at anything".
|
|
|
|
|
Up to this point, what you've described could be accomplished with a CSV file, which can be imported/exported from Excel and is (IMO) easier to work with.
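For example, here is a rough sketch of the CSV route in C# (untested; the column names, sample data and file name are placeholders of my own):

using System;
using System.Collections.Generic;
using System.Globalization;
using System.IO;
using System.Linq;

class CsvDemo
{
    static void Main()
    {
        // A few rows of sample data: (Col1, Col2, Col3).
        var rows = new List<(string Col1, double Col2, double Col3)>
        {
            ("Widget", 3, 2.50),
            ("Gadget", 1, 9.99),
            ("Gizmo",  7, 0.75),
        };

        // A simple calculation per row (Col2 * Col3), then sort by it.
        var sorted = rows
            .Select(r => (r.Col1, r.Col2, r.Col3, Total: r.Col2 * r.Col3))
            .OrderByDescending(r => r.Total)
            .ToList();

        // Header line plus one comma-separated line per row.
        var lines = new List<string> { "Col1,Col2,Col3,Total" };
        lines.AddRange(sorted.Select(r => string.Join(",",
            r.Col1,
            r.Col2.ToString(CultureInfo.InvariantCulture),
            r.Col3.ToString(CultureInfo.InvariantCulture),
            r.Total.ToString(CultureInfo.InvariantCulture))));

        File.WriteAllLines("demo.csv", lines);   // Excel opens .csv files directly
        Console.WriteLine($"Wrote demo.csv with {sorted.Count} data rows.");
    }
}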
"Before entering on an understanding, I have meditated for a long time, and have foreseen what might happen. It is not genius which reveals to me suddenly, secretly, what I have to say or to do in a circumstance unexpected by other people; it is reflection, it is meditation." - Napoleon I
|
|
|
|
|
Why not use Visual Basic for Applications (VBA)?
|
|
|
|
|
Hello!
Given a 2D array, each node in the array may or may not be "alive", based on random values.
I'm trying to find a way to connect "live" nodes in this 2D array to each other so that no isolated groups are created during the process, meaning that from any live node I can reach any other live node via the created paths.
A picture to visualize the problem.[^]
Can someone point me in the right direction?
|
|
|
|
|
Hi,
You are not really giving enough information about the problem you are trying to solve. The screenshot implies that there are impassable nodes.
Member 15615426 wrote: Can someone point me in the right direction? If you are trying to find an Eulerian path, check out Fleury's algorithm, as that might be what you're looking for.
|
|
|
|
|
You can use BFS or DFS to determine whether or not there is a path. You don't need to build a separate graph to run the BFS; the matrix itself can be used as one. Begin the traversal from the upper right corner, and if there is a way to reach the lower right corner, there is a path.
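A rough, untested sketch of that idea in C# (the grid, the start/target cells and all the names are my own choices for illustration):

using System;
using System.Collections.Generic;

class GridSearch
{
    // True if target is reachable from start by stepping through
    // orthogonally adjacent cells whose value is true ("alive").
    static bool PathExists(bool[,] alive, (int r, int c) start, (int r, int c) target)
    {
        int rows = alive.GetLength(0), cols = alive.GetLength(1);
        if (!alive[start.r, start.c] || !alive[target.r, target.c]) return false;

        var visited = new bool[rows, cols];
        var queue = new Queue<(int r, int c)>();
        visited[start.r, start.c] = true;
        queue.Enqueue(start);

        int[] dr = { -1, 1, 0, 0 };
        int[] dc = { 0, 0, -1, 1 };

        while (queue.Count > 0)
        {
            var (r, c) = queue.Dequeue();
            if ((r, c) == target) return true;

            for (int i = 0; i < 4; i++)
            {
                int nr = r + dr[i], nc = c + dc[i];
                if (nr < 0 || nr >= rows || nc < 0 || nc >= cols) continue;
                if (!alive[nr, nc] || visited[nr, nc]) continue;
                visited[nr, nc] = true;
                queue.Enqueue((nr, nc));
            }
        }
        return false;
    }

    static void Main()
    {
        bool[,] grid =
        {
            { true,  true,  false },
            { false, true,  true  },
            { false, false, true  },
        };
        Console.WriteLine(PathExists(grid, (0, 0), (2, 2)));   // True
    }
}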
|
|
|
|
|
Hello!
How do virtual sports games really work?
I know they use a random number generator to simulate the action and determine the outcome, but I would like to know how the whole mechanism works. How advanced are these algorithms? When a computer is functioning correctly, nothing it does is random, so the results of the games can't be totally random. Am I wrong?
|
|
|
|
|
The "random number generator" (i.e. dice, before computers), and the probabilities of a certain event happening are what determine the outcome.
There are always a number of possibilities: winning, losing, completely defeated, routed, etc.; based on the scenario.
Based on other factors (numerical superiority, morale, condition, prior movements, weather, etc.), the probability of a certain outcome is predicted (using prior history) and a possible result and consequence is assigned. The probability of a certain (dice) throw is then matched to the relative probability of the events to predict. One can use as many "dice" (random numbers) as are needed to cover the spread.
The throw is made, the number is matched to the probability assigned to a given outcome, and you have a result.
The key is that the probability of a given event varies with the factors in effect at that given time. You obviously need to capture (or simulate) all those factors in order to make a realistic "game".
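As a very rough sketch of the "throw matched to probabilities" step (the class names and the example weights are made up for illustration):

using System;
using System.Collections.Generic;

class Outcome
{
    public string Name = "";
    public double Weight;   // relative probability given the current factors
}

class VirtualMatch
{
    static readonly Random Rng = new();

    // One "dice throw": pick an outcome in proportion to its weight.
    static Outcome Resolve(List<Outcome> outcomes)
    {
        double total = 0;
        foreach (var o in outcomes) total += o.Weight;

        double throwValue = Rng.NextDouble() * total;
        foreach (var o in outcomes)
        {
            if (throwValue < o.Weight) return o;
            throwValue -= o.Weight;
        }
        return outcomes[^1];   // guard against floating-point edge cases
    }

    static void Main()
    {
        // The weights would normally be derived from form, morale, weather, etc.
        var outcomes = new List<Outcome>
        {
            new() { Name = "Home win", Weight = 0.50 },
            new() { Name = "Draw",     Weight = 0.28 },
            new() { Name = "Away win", Weight = 0.22 },
        };
        Console.WriteLine(Resolve(outcomes).Name);
    }
}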
"Before entering on an understanding, I have meditated for a long time, and have foreseen what might happen. It is not genius which reveals to me suddenly, secretly, what I have to say or to do in a circumstance unexpected by other people; it is reflection, it is meditation." - Napoleon I
|
|
|
|
|
How do I mathematically solve recurrence relations of the following forms:
1. T(n) = (2^n) T(n/2) + n^n
2. T(n) = 4T(n/2) + n^2 log n
Is there a generic method to solve these?
I realize that the master theorem is not applicable to these forms, because in (1), 2^n is not a constant, and (2) does not fall into any of the three cases of the master theorem.
|
|
|
|
|
Hi,
Dawood Ahmad 2021 wrote: Is there a generic method to solve these?
There are three ways to solve them.
1.) Master theorem
2.) Substitution
3.) Recurrence tree
As you pointed out, you cannot use the Master theorem here, so you should investigate the other two methods.
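For example, the recurrence tree handles the second one nicely. A sketch (assuming n is a power of 2 and logs are base 2):

T(n) = 4T(n/2) + n^2 \log n

Level i of the tree has 4^i subproblems of size n/2^i, so the work at level i is

4^i \left( \frac{n}{2^i} \right)^2 \log\frac{n}{2^i} = n^2 (\log n - i)

Summing over the \log n levels (plus \Theta(n^2) for the leaves):

T(n) = \sum_{i=0}^{\log n - 1} n^2 (\log n - i) + \Theta(n^2) = \Theta(n^2 \log^2 n)

For the first recurrence, the same kind of tree suggests the root term n^n dominates (the level costs fall off very quickly), giving T(n) = \Theta(n^n), but I would confirm a guess like that by substitution.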
|
|
|
|
|
I posted a question about this several months ago, but I didn't explain it very well and then I got sidetracked by some family emergencies. Now I need to get this done, so would appreciate any help.
Here's the situation. About a year ago, while cleaning out the attic, I found a box of old LP record albums. I haven't had a turntable for 20 years and, figuring that no one listens to LPs anymore, I was about to toss them. Before I did, I posted a message on our neighborhood email group. I got a ton of responses offering to take them and pleading with me not to toss them. I've always had a tendency to leap and then look, so I posted another note offering to collect any other unwanted albums anyone else might have in their attic and then have a yard or garage event to distribute them to those who want them. I got another surprising response. More than a dozen people responded and brought over albums. I now have over 1,000 albums in about 25 banker boxes all over my home office.
I need to get these distributed so I can use that space for other junk. 😉
The yard event is out. I have at least 20 "takers" so far. There is no way that I can find a time when they are all available and a couple of them are relatives of neighbors who are not in town. Plus, with 1,000 albums, people would be thumbing through the boxes forever. Much better to have a list that people can peruse at their leisure.
I decided to use this opportunity to polish up my database skills and come up with a way to allow each taker to select the albums they want online and then come up with an algorithm to allocate them fairly.
I have made some progress. I have the albums in an Access database. I have a Google Sheet derived from that database. I have a way to share that with the neighbors. They can mark the albums that they want. All I need now is the allocation algorithm.
My first question is how to set up the Google Sheet. Should I put a checkbox next to each album so that they check the albums they want? I don’t really like this option, because the albums are not equal. Each neighbor will have a different priority for different albums. I need a way for them to indicate a preference of one album over another.
Next, I thought about letting them rearrange the albums in priority order or give each one a priority number. But there are 1,000 albums. I think that would be difficult to manage.
My current thinking is to define a limited number of priorities, like 5 or 10. There would be a box next to each album into which each neighbor could enter a number from 1-5 or 1-10. Then I would develop an algorithm to allocate the #1s as fairly as possible, then the #2s, etc.
It soon occurred to me that I’ll need to limit the number of albums each neighbor can select in each priority. If one neighbor marked all 1,000 albums priority 1 and I allocate them first, no one would get any of their lower priority albums. There are 1,000 albums and about 25 takers, so a limit of 10 albums in each priority level should allow each neighbor to get some good albums without taking them all away from those who made them their #2 or #3 choice.
I also plan to allow each neighbor to indicate that they will take any albums left over and not taken by anyone. My plan is to give away all of them.
Now for the actual allocation. I see several possibilities. I could first allocate all of the albums that were selected by just 1 person. This seems like a good start.
Now do I allocate the contested albums by priority or not? That is, do I allocate the priority 1 albums, then the priority 2, and so on? Or do I allocate them all in one loop?
I could sort the albums by the number of takers, then allocate them in that order starting with the 1-taker albums. I would then use a weighting mechanism to give priority to the neighbor that rated it #1 over one who rated it lower.
I also plan to weight the takers by the number of albums each has already been allocated. This will help ensure that each neighbor gets their fair share. Should I count all of the albums allocated so far, including those that were not contested? On the one hand, it will help even out the total overall allocation. On the other hand, it seems unfair to punish someone for claiming an album that no one else wanted.
That’s as far as I have gotten so far. I’m working on a flowchart. I’ll post it when it’s ready.
Thanks for any suggestions.
|
|
|
|
|
1 - I would make people prioritize the records they want - only one number each, let's say from 1 to 100.
2 - They would then choose the records they want, in the rank they want, and would not be allowed to repeat numbers.
With that information, I would assign a priority value to each ranking; the higher the priority (1 is highest), the higher this number is. The balancing of these values is going to be a big part of your "justice".
I would do a cumulative priority number, and upon collision, the person that asked first for the record would get it.
Let's review this, in a slower way:
Let's say you use 1 => 400, 2 => 200, 3 => 100, 4 => 80, 5 => 70, 6 => 60
Now we have the first choice round. Everyone gets 400 priority points. Let's say 5 people.
But there is a collision: 2 people chose the MC Hammer record.
The one that asked for it before is going to get it. The other one... well, tough luck.
Now, we have the second choice round. Everyone gets 200 priority points.
But, the "un-hamnmered" guy now has 600 priority points. Whatever he chose for choice 2, if available, is going to get picked.
After that, everyone else can get their choice of number 2, if available.
Just repeat until there are no more records. This should be a fairly fair system - everyone is probably going to be equally pissed... lol... You are seriously underestimating human beings' capacity for being petty - I hope all goes well, but I really think you're going to have some headaches.
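A rough, untested sketch of that loop (the Taker class, the Allocate method and the sample data are names I made up; your real version would read the rankings from the sheet):

using System;
using System.Collections.Generic;
using System.Linq;

class Taker
{
    public string Name = "";
    public int Points;                     // cumulative priority points
    public Queue<string> Choices = new();  // album titles, best first
    public List<string> Won = new();
}

class Allocator
{
    // Points per round, as in the 1 => 400, 2 => 200, ... example above.
    static readonly int[] RoundPoints = { 400, 200, 100, 80, 70, 60 };

    static void Allocate(List<Taker> takers, HashSet<string> available)
    {
        for (int round = 0; round < RoundPoints.Length; round++)
        {
            foreach (var t in takers)
                t.Points += RoundPoints[round];

            // Collect this round's requests, then group them by album
            // so collisions become visible.
            var requests = new List<(Taker Taker, string Album)>();
            foreach (var t in takers)
            {
                if (t.Choices.Count == 0) continue;
                string album = t.Choices.Dequeue();
                if (available.Contains(album)) requests.Add((t, album));
            }

            foreach (var group in requests.GroupBy(r => r.Album))
            {
                // On a collision, the highest cumulative points wins; in round 1
                // everyone is equal, which is where "whoever asked first" (or any
                // other tie-break you prefer) comes in - here it falls back to list order.
                var winner = group.OrderByDescending(r => r.Taker.Points).First().Taker;
                winner.Won.Add(group.Key);
                winner.Points -= RoundPoints[round];   // the winner "spends" this round's points
                available.Remove(group.Key);
            }
        }
    }

    static void Main()
    {
        // Tiny made-up example: two takers colliding on the same album.
        var takers = new List<Taker>
        {
            new() { Name = "Pat", Choices = new Queue<string>(new[] { "MC Hammer", "Abbey Road" }) },
            new() { Name = "Sam", Choices = new Queue<string>(new[] { "MC Hammer", "Thriller" }) },
        };
        var available = new HashSet<string> { "MC Hammer", "Abbey Road", "Thriller" };

        Allocate(takers, available);
        foreach (var t in takers)
            Console.WriteLine($"{t.Name}: {string.Join(", ", t.Won)}");
    }
}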
|
|
|
|
|
I like your idea of people who ask for unique, uncontested albums receiving them, but it should count towards their “credits” or anti-credits. One person could get 20 unique albums. Someone else might get 5. If they both want the same album, I would tie-break in favor of the one with the lower count.
So eventually everyone would even up to 20.
If they have the same number of albums allocated, then tie break randomly.
The same applies to a 3-, 4-, 5-, etc. way tie. Whoever has the least number wins, or break randomly if multiple match.
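A tiny sketch of that tie-break (the Taker record is just my stand-in for however the takers are actually stored):

using System;
using System.Collections.Generic;
using System.Linq;

// Hypothetical: a taker and how many albums they have been allocated so far.
record Taker(string Name, int AlbumsWon);

class TieBreakDemo
{
    // Among takers contesting one album: fewest albums so far wins,
    // and any remaining tie is broken at random.
    static Taker PickWinner(IReadOnlyList<Taker> contenders, Random rng)
    {
        int fewest = contenders.Min(t => t.AlbumsWon);
        var tied = contenders.Where(t => t.AlbumsWon == fewest).ToList();
        return tied[rng.Next(tied.Count)];
    }

    static void Main()
    {
        var contenders = new List<Taker> { new("Pat", 20), new("Sam", 5), new("Lee", 5) };
        Console.WriteLine(PickWinner(contenders, new Random()).Name);   // Sam or Lee
    }
}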
After you run some scenarios, you might try and generate a “win” score for each person that shows how many contested albums they received separately from how many uncontested albums they received. That should give a good feel of how fair that pass was.
You can publish the distribution ahead of time, and let the people decide if they want to trade with each other.
Finally, put the blame on the program that “your friend” wrote.
You have no idea how it works!
|
|
|
|
|
Reference counting enables automatic memory release as long as there are no cycles. Users of smart pointers in C++ or Swift must be careful to use weak pointers properly.
I am thinking about extending smart pointers to manage cycles, and starting a new language that can use this algorithm.
I don't know whether the algorithm is correct for all possible graphs and graph forests (sets), and for all possible additions and removals of edges at runtime.
A preview of this algorithm:
Each block must have:
- a standard ref count (use count)
- a weak count
- an outgoing count
- a link_number
On the other hand, the pointers themselves are standard width, unlike the fat (double-width) smart pointers in C++.
For further info, the biggest method of the current version is at
https:
and main is at https:
Is it possible to prove the correctness of this algorithm, or to find leaks in it?
|
|
|
|
|
If you want to try this, I would suggest implementing it in C++ instead of designing a new language around it, which is a much bigger undertaking.
My view is that I don't want to rely on a garbage collector. They tend to periodically monopolize the CPU time in a system with a lot of objects. But because memory leaks do occur, I favor object pools with a background garbage collector that recovers the memory for an object that was not properly deleted. You can read about it in this article[^].
|
|
|
|
|
You can do a test with a circular reference.
A => B => C => A
So, if I have an initial pointer to an A object, then because of that construction it is going to have a count of 2 (because of C). The need for the garbage collector is that if you remove the initial pointer to A (whether on the stack or the heap), A's reference count only decreases to 1, and because of C it never goes to 0. Is your code prepared to deal with that?
The problem with reference counting is the one above: it doesn't count how many "access paths" you have to an object, just the number of references you have to it.
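To see it concretely, here is a hand-rolled toy version of plain reference counting in C# (the Node class and its counts are my own illustration, not your algorithm):

using System;

class Node
{
    public string Name;
    public int RefCount;
    public Node Next;   // "strong" reference that contributes to the target's RefCount

    public Node(string name) { Name = name; }

    public void AddRef() => RefCount++;

    public void Release()
    {
        RefCount--;
        Console.WriteLine($"{Name}: count = {RefCount}");
        if (RefCount == 0)
        {
            Console.WriteLine($"{Name} freed");
            Next?.Release();   // dropping our strong reference to Next
        }
    }
}

class CycleDemo
{
    static void Main()
    {
        var a = new Node("A");
        var b = new Node("B");
        var c = new Node("C");

        a.AddRef();                 // the external "initial pointer" to A
        a.Next = b; b.AddRef();
        b.Next = c; c.AddRef();
        c.Next = a; a.AddRef();     // A's count is now 2, as described above

        a.Release();                // drop the external pointer: A goes 2 -> 1
        // Nothing ever reaches zero; the cycle keeps itself alive under pure
        // reference counting, which is exactly what weak pointers are for.
    }
}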
|
|
|
|
|
Yes, it handles even more complicated cases than that, but I have no proof that it handles them all.
|
|
|
|
|
I'm practicing my dynamic programming skills and am having trouble coming up with a solution to this problem: Suppose there is a path with n positions. Consider the set S = 1. This is solved.
modified 19-Mar-22 19:40pm.
|
|
|
|
|
What real world problem are you trying to solve?
The difficult we do right away...
...the impossible takes slightly longer.
|
|
|
|
|
There's no real-world problem, it's just a problem haha. Any ideas on what a correct dynamic programming approach would look like?
|
|
|
|
|
I have some ideas, but your professor might not like me giving you the answer.
The difficult we do right away...
...the impossible takes slightly longer.
|
|
|
|
|
Oh, this isn't actually for school/homework. Given this, would you be willing to share your ideas?
|
|
|
|
|
"Dynamic programming" does not preclude you from iterating over every possible solution (which is limited in this case) in coming up with the best solution. Turning "big problems" into smaller ones to tackle.
"Before entering on an understanding, I have meditated for a long time, and have foreseen what might happen. It is not genius which reveals to me suddenly, secretly, what I have to say or to do in a circumstance unexpected by other people; it is reflection, it is meditation." - Napoleon I
|
|
|
|
|
Right, but in this case, aren't there a lot of possible solutions? For example, if we had S1 = {A,C}, S2={A, B}, S3={A,B,C}, S4={A,C}, S5={A,B,C} then there would be 2*2*3*2*3=72 possible solutions, and this grows quickly as the number of subsets grows. I figure you're imagining a different approach. In what ways can I limit the possible solutions that I consider?
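To make that concrete, here is a rough sketch of the brute-force enumeration over those example sets (the sets are the ones from my example above; everything else, including where a smarter scoring or pruning step would go, is just a guess):

using System;
using System.Collections.Generic;
using System.Linq;

class Enumerate
{
    // Recursively build every way of picking one element from each set.
    static IEnumerable<List<char>> Combinations(List<char[]> sets, int index)
    {
        if (index == sets.Count)
        {
            yield return new List<char>();
            yield break;
        }
        foreach (var choice in sets[index])
            foreach (var rest in Combinations(sets, index + 1))
            {
                var combo = new List<char> { choice };
                combo.AddRange(rest);
                yield return combo;
            }
    }

    static void Main()
    {
        var sets = new List<char[]>
        {
            new[] { 'A', 'C' },
            new[] { 'A', 'B' },
            new[] { 'A', 'B', 'C' },
            new[] { 'A', 'C' },
            new[] { 'A', 'B', 'C' },
        };

        Console.WriteLine(Combinations(sets, 0).Count());   // 72 = 2*2*3*2*3
        // A dynamic-programming approach would score or prune partial picks
        // here instead of materialising every combination.
    }
}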
|
|
|
|
|
72 iterations is trivial. Even thousands ... that's the "dynamic" part.
You haven't "bounded the problem". You can't go through life with "Yeah, but ...".
Incomplete specs lead to incomplete solutions. And, most of the time, you benchmark first; then ask the question: is it good enough?
"Before entering on an understanding, I have meditated for a long time, and have foreseen what might happen. It is not genius which reveals to me suddenly, secretly, what I have to say or to do in a circumstance unexpected by other people; it is reflection, it is meditation." - Napoleon I
|
|
|
|
|