trønderen wrote: I guess that source code files were stored as plain text, using the selected set of word symbols, right? So you couldn't take your source file to another machine, with other concrete mappings, and have it compiled there.
Excuse me for putting my few pennies into your great discussion, but I am afraid that in the days of ALGOL-60, the "source code files" existed only on punched cards!
---
My freshman class was the last one to hand in our 'Introductory Programming' exercises (in Fortran) on punched cards. Or rather: We wrote our code in special Fortran coding forms, and these were punched by secretaries, and the card decks put in the (physical!) job queue.
The Univac 1100 mainframe did have an option for punching binary files to cards. I believe that the dump was more or less direct binary, zero being no hole, 1 a hole, four columns per 36-bit word. Such cards were almost 50% holes, so you had to handle them with care! (I never used binary card dumps myself.)
The advantage of punched cards was that you had unlimited storage capacity. When we switched the following year to three 16-bit minis, a full-screen editor, and Pascal, we had 3 × 37 Mbyte for about a thousand freshman students. Once the OS and system software had taken their share, each student had access to less than 100 kbyte on average, and no external storage option, so we had to do frequent disk cleanups. (I was a TA at the time, and the TAs did much of the computer management.)
---
[edit]
I had written the response below before I noticed that other folks had already replied with similar stories.
[/edit]
trønderen wrote: I guess that source code files were stored as plain text, using the selected set of word symbols, right? So you couldn't take your source file to another machine, with other concrete mappings, and have it compiled there.
That is correct - the source code was hand-punched onto 80-column cards. I got quite adept with the multi-finger buttons for each character. You then put the box of punched cards into a holding area where someone would feed them to the card reader (hopefully without dropping them and randomly re-sorting them). Then the job was run and a line-printer listing was delivered to the same holding area (where, hopefully, your card deck was also returned) - this was the first time you could see the text you had written. At university, the turnaround time was half a day; at my first full-time job it was nearer a fortnight; so computer run times were an insignificant part of the round-trip time.
Before uni, I had to fill in coding sheets which were posted to the computer centre (round trip: one or two weeks). This added an extra layer of jeopardy - would the cards be punched with the text actually written on the coding sheets? The answer was almost invariably 'no' for at least three iterations, so the first run that enabled any debugging could come six weeks after the date you wrote the program.
---
jschell wrote: First, at least last time I created a compiler much less studied compilers it is not "simple" to replace keywords. You certainly must know the semantics of the abstract token. And you must know the concrete language you want to use. This is not a job for Google Translate. But we are not talking about defining a new programming language, just modifying the symbol table used by the tokenizer. That is orders of magnitude simpler than writing an all-new compiler.
jschell wrote: Second the latter part of that statement seems to suggest exactly the problem that compilers need to solve with the first part of what I said. Compilers (and interpreters) already convert key words into tokens. Those that do not are very inefficient (as I know since I had to work with one long ago.) Quite the contrary. If you store the source code as its abstract tokens - the result of the tokenizer's job (and maybe even after a fundamental structure parse) - that heavy job you are referring to is done "once and for all". Or at least until the code is edited, and then reparsing is required only for the substructure affected by the edit. Further compilation would be significantly faster, as much of the work is already done.
We have had precompiled header files for many years. This is along the same lines, except that the pre-parsed code is the primary source representation, and the mapping back to ASCII symbols is done only for editing purposes, and only for the part of the code displayed to the programmer at that moment.
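A minimal sketch of the idea in C# (the token names and the two tiny keyword tables are invented for illustration; no real compiler works exactly like this): the stored form is a stream of abstract tokens, and a per-language symbol table maps them back to concrete keywords for display and editing.

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

// Sketch: source is stored as abstract keyword tokens; a per-language
// symbol table maps tokens back to concrete keywords for editing/display.
class TokenizedSourceDemo
{
    static readonly Dictionary<string, Dictionary<string, string>> Keywords = new()
    {
        ["en"] = new() { ["begin"] = "KW_BEGIN", ["end"] = "KW_END", ["if"] = "KW_IF" },
        ["no"] = new() { ["start"] = "KW_BEGIN", ["slutt"] = "KW_END", ["hvis"] = "KW_IF" },
    };

    // Replace locale-specific keywords with abstract tokens; pass the rest through.
    static string[] Tokenize(string source, string locale) =>
        source.Split(' ').Select(w => Keywords[locale].GetValueOrDefault(w, w)).ToArray();

    // Map abstract tokens back to the keywords of another locale.
    static string Render(string[] tokens, string locale)
    {
        var reverse = Keywords[locale].ToDictionary(kv => kv.Value, kv => kv.Key);
        return string.Join(" ", tokens.Select(t => reverse.GetValueOrDefault(t, t)));
    }

    static void Main()
    {
        var tokens = Tokenize("begin if x end", "en"); // stored once, as tokens
        Console.WriteLine(Render(tokens, "no"));       // start hvis x slutt
    }
}
```

Note that only the symbol tables differ between languages; the token stream, and everything downstream of it, is identical.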
jschell wrote: Requirements, Architecture, Design are all in that natural language. All of those have much more impact on the solution than the actual code itself. In "a" natural language, yes: English. Far too often, the design language is English even when the customer and the end users have a different native language.
The norm is that the native language is used only in the very first input sessions with the customer. Then the developers retreat to their ivory tower to create The Solution as they see it. First, it is back to English; second, the ties to whatever was discussed with the customer are next to non-existent. The software structure is not a more detailed breakdown of the boxes you showed the customer. The modules, functions, methods, and data structure names are unrelated to the terms used in the discussion with the customer. Suppose the customer has a complaint, saying for example: "We think that when handling this and that form, in this and that procedure, you are not calculating the field the way it should be done. Can you show us how you do it?" First, going from the functional module and procedure identified in the discussion with the customer to the right software module and method in the code may be far from straightforward; developers do it their own way, and the customer presentation is only for the customer. Second, if the developers are willing to let the customer see the code (most likely they refuse!), there is a very significant risk that the customer will have a hard time recognizing his problem: the programmers do not know the professional terminology, so they mislabel terms. They have never recognized the natural breakdown of the partial tasks, solving the problem in ways that look convoluted to the customer. The programmers have never seen the real application environment and do not know how different attributes really belong together.
If the developers did not switch from the native language to English as soon as the customer was out the door, far more real domain knowledge could be preserved in the solution. During development, the customer could provide a lot more domain knowledge, correcting and aiding the developers as they progress through the detailed design and on to the code design. A customer who knows the logic of the solution, even if he doesn't know the specific code statements, can provide much more helpful feedback, both in bug finding and in discussions about future extensions and additions.
I know very well that customers without a clue about computers can give very valuable input. They can discuss solution logic. They can discuss data flows and functional groups. I have done that in several projects, serving as a messenger between the end users and the development group. (And in one library project, when I put forward the wishes of the future users, the project leader sneered back at me: "F*** those librarians!" That was his honest attitude towards the end users; they were a source of noise, nothing more.)
My attitude towards customers and end users is at the very opposite end of the scale.
---
I am working on ranking different social influencers based on a set of metrics.
Metrics collected:
• username
• categories (the niche the influencer is in)
• influencer_type
• followers
• follower_growth, follower_growth_rate
• highlightReelCount, igtvVideoCount, postsCount
• avg_likes, avg_comments
• likes_comments_ratio (comments per 100 likes, used as an authenticity indicator)
• engagement_rate
• authentic_engagement (the number of likes and comments that come from real people)
• post_per_week
• 1/post_like, 1/post_comment (per post, over the 12 latest posts)
• 1/igtv_likes, 1/igtv_comment (per video, over the 12 latest IGTVs)
Here's what the data looks like:
Sample_data - https://drive.google.com/file/d/15obMah9pGI3CutOZMJNqfr3O95rLz2JS/view?usp=sharing
Objective: Rank the social influencers according to their influential power with the use of the metrics collected above.
There are a few ranking algorithms to choose from, which are:
a) Compute the score for influential power with Multi-Criteria Decision Making (MCDM) and rank it with regression
b) Create a classification model and rank them by predicted probability
c) Compute the score for influential power with Multi-Criteria Decision Making (MCDM) and rank it with a machine learning model such as SVM, a decision tree, or a deep neural network
d) Learning to rank algorithm like CatBoost
e) Trending algorithm
I would like to ask which of the algorithms above would be most suitable for this project - could you compare them and give the reasons? Any ideas will be much appreciated!
External links for algorithms:
1. [MCDM](https://towardsdatascience.com/ranking-algorithms-know-your-multi-criteria-decision-solving-techniques-20949198f23e)
2. [Catboost](https://catboost.ai/en/docs/concepts/python-quickstart)
3. [Trending algorithm](https://www.evanmiller.org/deriving-the-reddit-formula.html)
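Whichever option is chosen, the scoring half of (a) and (c) is cheap to prototype. Below is a minimal weighted-sum MCDM sketch in C# - the three metrics, the weights, and the sample values are made up, not taken from the linked data: each metric is min-max normalized so the different scales become comparable, then combined into a single influence score used for ranking.

```csharp
using System;
using System.Linq;

// Minimal weighted-sum MCDM sketch (scoring step only).
// Metrics, weights, and sample values are hypothetical.
class InfluencerRanking
{
    static void Main()
    {
        string[] names = { "alice", "bob", "carol" };
        // Columns: followers, engagement_rate, authentic_engagement
        double[][] metrics =
        {
            new[] { 120_000.0, 0.04, 3_000.0 },
            new[] {  45_000.0, 0.09, 2_500.0 },
            new[] { 300_000.0, 0.02, 4_000.0 },
        };
        double[] weights = { 0.3, 0.4, 0.3 }; // must sum to 1; chosen arbitrarily

        var scores = new double[names.Length];
        for (int c = 0; c < weights.Length; c++)
        {
            // Min-max normalize each metric (assumes each column is not constant).
            double min = metrics.Min(r => r[c]), max = metrics.Max(r => r[c]);
            for (int r = 0; r < names.Length; r++)
                scores[r] += weights[c] * (metrics[r][c] - min) / (max - min);
        }

        foreach (var (name, score) in names.Zip(scores).OrderByDescending(p => p.Second))
            Console.WriteLine($"{name}: {score:F3}");
    }
}
```

One practical difference worth noting: a learning-to-rank approach like (d) CatBoost needs labeled training data (known "influence" outcomes to learn from), while MCDM only needs you to choose the weights.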
---
Ask each one how much money they make: the bank account algorithm.
"Before entering on an understanding, I have meditated for a long time, and have foreseen what might happen. It is not genius which reveals to me suddenly, secretly, what I have to say or to do in a circumstance unexpected by other people; it is reflection, it is meditation." - Napoleon I
---
I have a task to complete, and I'm trying to learn Visual Studio. I also have the task of programmatically manipulating data inside an existing Excel worksheet.
It will involve manipulating an Excel spreadsheet: opening, closing, saving, modifying cells, etc.
I also wish to learn the Visual Studio IDE. To do both, I'd like to write (or see) a simple example program that uses Visual Studio, without any aftermarket libraries like Pandas or others.
Basically, what I'd like to do is install Visual Studio and all the relevant tools that come with it to support this task.
I have already installed Visual Studio 2022. I also have the Community version on another computer. I think I have options on which compiler to use; that is, I think my Visual Studio comes with the usual suspects: C#, C++, Python, etc. Correct me if I'm wrong.
Questions:
1) How hard is it to use just Visual Studio and one of the included compilers to open, close, edit, and save Excel files?
2) Which included compiler should I use? That is, which is best for what I want to do?
3) Is there a good Visual Studio project template that illustrates doing what I need to do?
I might eventually use some of the add-on libraries, but for now I'd just like to keep it simple. I also realize that using something like Pandas, or other add-ons, might actually make it easier, but the learning curve might be longer.
I just need to be pointed in the right direction, for my initial tasks.
Thanks
Mike
---
Thanks. Yeah, I realize that I need to know how to use whatever language I use. I'm pretty good at reverse engineering, so that's why I wanted some good examples. Mainly, at this point, I'm just trying to use Visual Studio; I'm not familiar with this version, and I don't want to complicate the process by having to create and debug code to use it - although that might be useful. This way, if I can open a project that is known good, all my effort will go into getting Visual Studio to run it.
I tried one of the VB example Excel projects, for instance, and right off the bat I see it targets the wrong framework. It says it can't find the .NET Framework 3.5, and that one way to solve it is to install the SDK targeting pack - whatever that means. So now I'll begin to track that down. If I can successfully get this configured to run one of the examples, my goal for this part is accomplished.
Thanks
---
Articles are generally for guidance and will often be out of date with respect to the latest framework and version of Visual Studio. So unless you really understand what you are doing, you are going to face some difficult problems.
---
Thanks,
I grabbed one from that list and opened it with Visual Studio, and it gave me a .NET Framework error. I installed the 3.5 version that it was missing, and the program continued, opened, and asked me for information to create the Excel spreadsheet. This is exactly what I was looking for.
Now I'll look around in this, and then, when I understand the use of the IDE, I'll go find a more complex example.
Thanks
---
Excel has its own programming language; it's called VBA.
You start by proving to yourself why you have to use C# instead.
I would "program" my Excel sheets (in VBA) to generate SQL table definitions, for example.
"Before entering on an understanding, I have meditated for a long time, and have foreseen what might happen. It is not genius which reveals to me suddenly, secretly, what I have to say or to do in a circumstance unexpected by other people; it is reflection, it is meditation." - Napoleon I
---
I understand that. I've been using Excel since version 1 (I think). But I'm not trying to learn Excel; I'm trying to learn Visual Studio. It's rather obtuse, as far as I'm concerned. I first started using MSVC decades ago, with version 6, and haven't used it since.
So, I have a few goals (not one of which is learning Excel). I work in a test lab. We have existing tools that spit out Excel files when tests are completed; we then post-process those. Most of the software where I work uses Visual Studio and C#.
So I would like to learn more about C# and get back to knowing how to use Visual Studio. That's why I was looking for good examples of using Excel in Visual Studio - that way I have files that should be easy to open in Visual Studio.
My first task will be to open one of the examples, correct all the build errors (.NET Framework mismatch, for instance), and then execute it. At that point I can begin looking at the code for working with Excel files.
---
Member 13159493 wrote: looking for good examples of using Excel, in Visual Studio. You need to understand that it is not Visual Studio that you use to open Excel files, but .NET and one of its languages, e.g. C# or VB.NET. And Visual Studio is used as a support tool for .NET, not the other way round.
---
Well, I know that. When I said that, I meant "open" with regard to seeing the files in Visual Studio. I know very well that the programming language I choose opens the files for manipulation; Visual Studio is just the IDE for editing and managing the files.
All I'm doing at this point is trying to click on a solution file, have all the files come into Visual Studio, and fix all the errors when I click Start; THEN I will begin to investigate the actual programs. I have another computer that already has a more complex set of C# files that opens in Visual Studio. I can see all the files, but it's too complicated (with hundreds of files) to use as an example.
A perfect example of what I'm trying to do at this point would be a solution in C# or VB, with more than one file, that creates an Excel file, adds a few columns with header names (Col1, Col2, Col3), puts in a few rows of data (any types of values), does a few calculations, maybe some selection and sorting, and saves the file.
Just understand that I'm a hardware guy with software interests. I won't ever have all the technical jargon down pat. As my wife reminds me, "you are pretty good at everything, but not real good at anything".
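For what it's worth, here is a minimal C# sketch of roughly that shape, using the Excel interop assembly that ships with Office (it requires Excel installed and a COM reference to the "Microsoft Excel Object Library"; the output path is made up). It is one way to do it, not the only one - newer code often uses the OpenXML SDK instead, which does not need Excel installed.

```csharp
using System;
using Excel = Microsoft.Office.Interop.Excel;

// Minimal interop sketch: create a workbook, add headers and data, save it.
// Real code should also release the COM objects (Marshal.ReleaseComObject);
// that is omitted here for brevity.
class ExcelDemo
{
    static void Main()
    {
        var app = new Excel.Application();
        try
        {
            Excel.Workbook book = app.Workbooks.Add();
            var sheet = (Excel.Worksheet)book.Worksheets[1];

            // Header row: Col1, Col2, Col3.
            sheet.Cells[1, 1] = "Col1";
            sheet.Cells[1, 2] = "Col2";
            sheet.Cells[1, 3] = "Col3";

            // A few rows of data plus a calculated column (an Excel formula).
            for (int row = 2; row <= 4; row++)
            {
                sheet.Cells[row, 1] = row - 1;
                sheet.Cells[row, 2] = (row - 1) * 10;
                sheet.Cells[row, 3] = $"=A{row}+B{row}";
            }

            book.SaveAs(@"C:\temp\demo.xlsx");
            book.Close();
        }
        finally
        {
            app.Quit(); // otherwise EXCEL.EXE keeps running in the background
        }
    }
}
```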
---
Up to this point, what you've described could be accomplished with a CSV file; which can be imported / exported from Excel and is easier to work with (IMO).
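To illustrate, a minimal C# sketch of the CSV route, using nothing but the standard library (the file name is invented, and this naive version assumes no values contain embedded commas or quotes):

```csharp
using System;
using System.IO;
using System.Linq;

// Minimal CSV sketch: write a header + rows, read them back, do a calculation.
class CsvDemo
{
    static void Main()
    {
        var rows = new[]
        {
            "Col1,Col2,Col3",
            "1,10,11",
            "2,20,22",
            "3,30,33",
        };
        File.WriteAllLines("demo.csv", rows);

        // Read it back and sum the second column.
        var sum = File.ReadAllLines("demo.csv")
                      .Skip(1) // skip the header
                      .Sum(line => int.Parse(line.Split(',')[1]));
        Console.WriteLine(sum); // 60
    }
}
```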
"Before entering on an understanding, I have meditated for a long time, and have foreseen what might happen. It is not genius which reveals to me suddenly, secretly, what I have to say or to do in a circumstance unexpected by other people; it is reflection, it is meditation." - Napoleon I
---
Why not use Visual Basic for Applications (VBA)?
---
Hello!
Given a 2D array, each node in the array may or may not be "alive", based on random values.
I'm trying to find a way to connect the "live" nodes in this 2D array to each other so that no isolated groups are created in the process, meaning that from any live node I can reach any other live node via the created paths.
A picture to visualize the problem.[^]
Can someone point me in the right direction?
---
Hi,
You are not really giving enough information about the problem you are trying to solve. The screenshot implies that there are impassable nodes.
Member 15615426 wrote: Can someone point me in the right direction? If you are trying to find an Eulerian path, check out Fleury's algorithm, as that might be what you're looking for.
---
You can use BFS or DFS to determine whether or not there is a path. You don't need to build an explicit graph to run the BFS; the matrix itself can be used as one. Begin the traversal from the upper right corner, and if there is a way to reach the lower right corner, there is a path.
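To make that concrete for the question asked above (are the live cells all mutually reachable?), here is a minimal C# BFS sketch - the grid and all names are illustrative: flood-fill from one live cell and check whether every live cell was reached.

```csharp
using System;
using System.Collections.Generic;

// BFS flood-fill from one live cell; if it reaches every live cell,
// the live cells form a single connected group.
class GridConnectivity
{
    static bool AllConnected(bool[,] alive)
    {
        int rows = alive.GetLength(0), cols = alive.GetLength(1);
        var start = (-1, -1);
        int liveCount = 0;
        for (int r = 0; r < rows; r++)
            for (int c = 0; c < cols; c++)
                if (alive[r, c]) { liveCount++; start = (r, c); }
        if (liveCount == 0) return true;

        var seen = new bool[rows, cols];
        var queue = new Queue<(int r, int c)>();
        queue.Enqueue(start);
        seen[start.Item1, start.Item2] = true;
        int reached = 0;
        int[] dr = { -1, 1, 0, 0 }, dc = { 0, 0, -1, 1 }; // 4-way neighbours
        while (queue.Count > 0)
        {
            var (r, c) = queue.Dequeue();
            reached++;
            for (int k = 0; k < 4; k++)
            {
                int nr = r + dr[k], nc = c + dc[k];
                if (nr >= 0 && nr < rows && nc >= 0 && nc < cols
                    && alive[nr, nc] && !seen[nr, nc])
                {
                    seen[nr, nc] = true;
                    queue.Enqueue((nr, nc));
                }
            }
        }
        return reached == liveCount;
    }

    static void Main()
    {
        bool[,] grid = { { true,  true,  false },
                         { false, true,  false },
                         { false, false, true  } }; // bottom-right cell is isolated
        Console.WriteLine(AllConnected(grid));      // False
    }
}
```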
---
Hello!
How do virtual sports games really work?
I know they use a random number generator to simulate the action and determine the outcome, but I would like to know how the whole mechanism works. How advanced are these algorithms? When a computer is functioning correctly, nothing it does is random, so the results of the games can't be totally random - am I wrong?
---
The "random number generator" (i.e. dice, before computers), and the probabilities of a certain event happening are what determine the outcome.
There are always a number of possibilities: winning, losing, completely defeated, routed, etc.; based on the scenario.
Based on other factors (numerical superiority, morale, condition, prior movements, weather, etc.), the probability of a certain outcome is predicted (using prior history) and a possible result and consequence is assigned. The probability of a certain (dice) throw is then matched to the relative probability of the events to predict. One can use as many "dice" (random numbers) as are needed to cover the spread.
The throw is made, the number is matched to the probability assigned to a given outcome, and you have a result.
The key is that the probability of a given event varies with the factors in effect at that given time. You obviously need to capture (or simulate) all those factors in order to make a realistic "game".
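A minimal C# sketch of that mechanism (the outcomes, base probabilities, and the "morale" factor are invented for illustration): situational factors adjust the outcome weights, then a pseudo-random draw - deterministic inside the machine, but statistically random enough for a game - picks the result.

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

// Weighted random outcome: factors scale the weights, one draw picks a result.
class VirtualMatch
{
    static string Draw(Dictionary<string, double> odds, Random rng)
    {
        double total = odds.Values.Sum();
        double roll = rng.NextDouble() * total; // the "dice throw"
        foreach (var (outcome, weight) in odds)
        {
            roll -= weight;
            if (roll <= 0) return outcome;
        }
        return odds.Keys.Last(); // guard against floating-point leftovers
    }

    static void Main()
    {
        // Base probabilities, then a hypothetical "morale" factor boosting a win.
        var odds = new Dictionary<string, double>
        {
            ["home win"] = 0.45 * 1.10, // morale bonus
            ["draw"]     = 0.30,
            ["away win"] = 0.25,
        };
        var rng = new Random(); // a PRNG: deterministic, but unpredictable enough
        Console.WriteLine(Draw(odds, rng));
    }
}
```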
"Before entering on an understanding, I have meditated for a long time, and have foreseen what might happen. It is not genius which reveals to me suddenly, secretly, what I have to say or to do in a circumstance unexpected by other people; it is reflection, it is meditation." - Napoleon I
---
How do I mathematically solve recurrence relations of the following forms?
1. [T(n) = 2^n · T(n/2) + n^n][1]
2. [T(n) = 4T(n/2) + n^2 log n][2]
Is there a generic method to solve these?
I realize that the master theorem is not applicable to these forms because, in (1), 2^n is not a constant, and (2) does not fall into any of the three cases of the master theorem.
[1]: https:
[2]: https:
---
Hi,
Dawood Ahmad 2021 wrote: Is there a generic method to solve these?
There are three common ways to solve recurrences like these:
1.) The master theorem
2.) Substitution
3.) The recurrence tree
As you pointed out, you cannot use the master theorem here, so you should investigate the other two methods.
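For example, a recurrence-tree sketch for the second recurrence (assuming n is a power of 2 and a constant-time base case): level i contains 4^i subproblems of size n/2^i, so the total work at that level is n^2 log(n/2^i), and summing over the log2 n levels gives:

```latex
% Recursion tree for T(n) = 4T(n/2) + n^2 \log n  (n a power of 2):
% level i has 4^i subproblems of size n/2^i, each costing (n/2^i)^2 \log(n/2^i).
\begin{aligned}
T(n) &= \sum_{i=0}^{\log_2 n - 1} 4^i \left(\frac{n}{2^i}\right)^{2} \log\frac{n}{2^i}
      = n^2 \sum_{i=0}^{\log_2 n - 1} \left(\log n - i \log 2\right) \\
     &= n^2 \left( \log_2 n \cdot \log n \;-\; \log 2 \cdot \frac{(\log_2 n - 1)\log_2 n}{2} \right)
      = \Theta\!\left(n^2 \log^2 n\right).
\end{aligned}
```

For the first recurrence, unrolling by substitution for a couple of levels suggests that the root term dominates (the level terms shrink rapidly in the exponent), giving T(n) = Θ(n^n); working that expansion out is a good exercise in the substitution method.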
---
I posted a question about this several months ago, but I didn't explain it very well and then I got sidetracked by some family emergencies. Now I need to get this done, so would appreciate any help.
Here's the situation. About a year ago, while cleaning out the attic, I found a box of old LP record albums. I haven't had a turntable for 20 years and, figuring that no one listens to LPs anymore, I was about to toss them. Before I did, I posted a message on our neighborhood email group. I got a ton of responses offering to take them and pleading with me not to toss them. I've always had a tendency to leap and then look, so I posted another note offering to collect any other unwanted albums anyone else might have in their attic and then have a yard or garage event to distribute them to those who want them. I got another surprising response. More than a dozen people responded and brought over albums. I now have over 1,000 albums in about 25 banker boxes all over my home office.
I need to get these distributed so I can use that space for other junk. 😉
The yard event is out. I have at least 20 "takers" so far. There is no way that I can find a time when they are all available and a couple of them are relatives of neighbors who are not in town. Plus, with 1,000 albums, people would be thumbing through the boxes forever. Much better to have a list that people can peruse at their leisure.
I decided to use this opportunity to polish up my database skills and come up with a way to allow each taker to select the albums they want online and then come up with an algorithm to allocate them fairly.
I have made some progress. I have the albums in an Access database. I have a Google Sheet derived from that database. I have a way to share that with the neighbors. They can mark the albums that they want. All I need now is the allocation algorithm.
My first question is how to set up the Google Sheet. Should I put a checkbox next to each album so that they check the albums they want? I don't really like this option, because the albums are not equal. Each neighbor will have a different priority for different albums. I need a way for them to indicate a preference of one album over another.
Next, I thought about letting them rearrange the albums in priority order or give each one a priority number. But there are 1,000 albums. I think that would be difficult to manage.
My current thinking is to define a limited number of priorities, like 5 or 10. There would be a box next to each album into which each neighbor could enter a number from 1-5 or 1-10. Then I would develop an algorithm to allocate the #1s as fairly as possible, then the #2s, etc.
It soon occurred to me that I’ll need to limit the number of albums each neighbor can select in each priority. If one neighbor marked all 1,000 albums priority 1 and I allocate them first, no one would get any of their lower priority albums. There are 1,000 albums and about 25 takers, so a limit of 10 albums in each priority level should allow each neighbor to get some good albums without taking them all away from those who made them their #2 or #3 choice.
I also plan to allow each neighbor to indicate that they will take any albums left over and not taken by anyone. My plan is to give away all of them.
Now for the actual allocation. I see several possibilities. I could first allocate all of the albums that were selected by just 1 person. This seems like a good start.
Now do I allocate the contested albums by priority or not? That is, do I allocate the priority 1 albums, then the priority 2, and so on? Or do I allocate them all in one loop?
I could sort the albums by the number of takers, then allocate them in that order starting with the 1-taker albums. I would then use a weighting mechanism to give priority to the neighbor that rated it #1 over one who rated it lower.
I also plan to weight the takers by the number of albums each already allocated. This will help ensure that each neighbor gets their fair share. Should I count all of the albums allocated so far including those that were not contested? On the one hand, it will help even out the total overall allocation. On the other hand, it seems unfair to punish someone for claiming an album that no one else wanted.
That’s as far as I have gotten so far. I’m working on a flowchart. I’ll post it when it’s ready.
Thanks for any suggestions.
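If it helps, here is one possible shape for that allocation pass, as a C# sketch (the data structures, names, and sample albums are all invented; the rules are just the ones described above): albums wanted by exactly one person go straight to that taker, then contested albums are handled fewest-requesters-first, each going to the requester with the best priority, with ties broken in favour of whoever has received the fewest albums so far.

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

// requests[album] = list of (taker, priority), where priority 1 = most wanted.
class AlbumAllocator
{
    static Dictionary<string, string> Allocate(
        Dictionary<string, List<(string taker, int priority)>> requests)
    {
        var result = new Dictionary<string, string>();
        var counts = new Dictionary<string, int>(); // albums received per taker

        // Pass 1: albums wanted by exactly one person.
        foreach (var (album, takers) in requests.Where(r => r.Value.Count == 1))
        {
            string taker = takers[0].taker;
            result[album] = taker;
            counts[taker] = counts.GetValueOrDefault(taker) + 1;
        }

        // Pass 2: contested albums, fewest requesters first.
        foreach (var (album, takers) in requests
                     .Where(r => r.Value.Count > 1)
                     .OrderBy(r => r.Value.Count))
        {
            var winner = takers
                .OrderBy(t => t.priority)                        // best priority wins
                .ThenBy(t => counts.GetValueOrDefault(t.taker))  // then fewest so far
                .First();
            result[album] = winner.taker;
            counts[winner.taker] = counts.GetValueOrDefault(winner.taker) + 1;
        }
        return result;
    }

    static void Main()
    {
        var requests = new Dictionary<string, List<(string, int)>>
        {
            ["Abbey Road"]   = new() { ("Ann", 1), ("Bob", 1) },
            ["Kind of Blue"] = new() { ("Bob", 2) },
            ["Rumours"]      = new() { ("Ann", 2), ("Bob", 3) },
        };
        foreach (var (album, taker) in Allocate(requests))
            Console.WriteLine($"{album} -> {taker}");
    }
}
```

Note one deliberate choice: the tie-breaker counts everything allocated so far, including uncontested albums; if that feels like punishing people for claiming albums nobody else wanted, keep a separate count of contested wins and break ties on that instead.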