|
I guess that source code files were stored as plain text, using the selected set of word symbols, right? So you couldn't take your source file to another machine, with other concrete mappings, and have it compiled there.
(I have never seen TNEMMOC, but I have seen ERUDECORP. I suspect that it was a macro definition, though, made by someone hating IF-FI and DO-OD. Btw: It must have been in Sigplan Notices, around 1980, that one guy wrote an article "do-ob considered odder than do-od". The article went on to propose that a block be denoted by do ... ob in the top corners and po ... oq in the lower corners. Maybe it wasn't Sigplan Notices, but the Journal of Irreproducible Results.)
I have come to the conclusion that a better solution is to store the parse tree, and do the mapping to word symbols only when presenting the code on the screen to the developer (with keywords indicating structure etc. read-only; you would create new structures by function keys or menu selections). This obviously requires a screen and a 'graphic' style IDE, which wasn't available in the 1960s, and processing power that wasn't available in the 1960s either. Today, both screens and CPU power are cheap and plentiful.
One obvious advantage is that you can select the concrete symbols to suit your needs, mother tongue or whatever. A second advantage is that you never see any misleading indentation etc. - any such thing is handled by the IDE. This is of course closely connected to the third advantage: As all developer input is parsed and processed immediately, and rejected immediately if syntactically incorrect, there is no way to store program code with syntax errors.
Of course the immediate parsing requires more power than simply inserting keystrokes into a line buffer, but it is distributed in time: spending 10 ms of CPU per keystroke, with keystrokes at least 100 ms apart, is perfectly OK (and you do it not per keystroke, but per token).
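A minimal sketch of that mapping layer in C# (all token names and word tables are made up for illustration): the stored program consists of abstract tokens only; the editor renders them through whichever concrete word table the developer selects, so the stored form never changes when the surface language does.

```csharp
// Sketch: abstract tokens are the stored form; concrete keywords are
// a per-locale rendering concern only. All names are hypothetical.
using System.Collections.Generic;
using System.Linq;

enum Token { If, Then, Else, EndIf, While, Do, EndDo }

static class KeywordTables
{
    public static readonly Dictionary<Token, string> English = new()
    {
        [Token.If] = "if",      [Token.Then] = "then", [Token.Else] = "else",
        [Token.EndIf] = "fi",   [Token.While] = "while",
        [Token.Do] = "do",      [Token.EndDo] = "od",
    };

    public static readonly Dictionary<Token, string> Norwegian = new()
    {
        [Token.If] = "hvis",    [Token.Then] = "så",   [Token.Else] = "ellers",
        [Token.EndIf] = "sivh", [Token.While] = "mens",
        [Token.Do] = "gjør",    [Token.EndDo] = "røjg",
    };

    // Rendering never touches the stored tokens, only their on-screen text.
    public static string Render(IEnumerable<Token> code,
                                IReadOnlyDictionary<Token, string> table)
        => string.Join(" ", code.Select(t => table[t]));
}
```

The same token sequence renders as "if ... then ... fi" for one developer and "hvis ... så ... sivh" for another, and the compiler back end never sees either.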
And you save significant time when you press F5 (that is, in VS!) to compile and run your program in the debugger: some steps that are known for being time consuming are already done. Lexing and parsing are complete. F5 can go directly to the tree hugging stage, doing its optimizations at that level, and on to code generation, and the program is running before your finger is off the F5 key.
In my (pre-URL) student days, I read a survey of how various compilers spent their time. One extreme case was a CDC mainframe that spent 60% of its time fetching the next character from the source input file! Most computers (even in those days) were better at reading text input; yet lexing and parsing (where maybe 90% of the error checking takes place) is a resource hog. The .NET jitter is speedy largely because it doesn't have to do much error checking (nor does it handle character input).
In other kinds of computer applications, such as document processing, CAD/CAM systems and lots of others, users have no problem accepting that their data are managed and stored in binary formats. Other branches in this discussion have most definitely shown that this is a tough one for software developers: letting the 7-bit ASCII file go feels like completely losing control of their code. So I certainly do not expect the next version of VS to provide an option for storing my C# file as a parse tree!
I do that for this one project I am developing, though. We are not talking about C#, but a simplified version where non-IT people specify what goes into selected action alternatives, repeated actions, etc. All C# red tape is hidden from the user, but the 'code' they specify is used to generate C# code inserted into a skeleton, which is then compiled behind the user's back. The user sees everything in his mother tongue, and in a simplified manner that hides lots of details of minimal interest to the common user. So ... I know that this is a workable solution, in particular when making UIs for non-technical users. I am 100% certain that it would be equally workable for a programming language.
|
|
|
|
|
trønderen wrote: I guess that source code files were stored as plain text, using the selected set of word symbols, right? So you couldn't take your source file to another machine, with other concrete mappings, and have it compiled there.
Excuse me for putting my few pennies into your great discussion; however, I am afraid that in the times of ALGOL-60, the "source code files" existed (only) on punched cards!
|
|
|
|
|
My freshman class was the last one to hand in our 'Introductory Programming' exercises (in Fortran) on punched cards. Or rather: we wrote our code on special Fortran coding forms, and these were punched by secretaries, and the card decks put in the (physical!) job queue.
The Univac 1100 mainframe did have an option for punching binary files to cards. I believe that the dump was more or less direct binary, zero being no hole, 1 a hole, 4 columns per 36-bit word. Such cards were almost 50% holes, so you had to handle them with care! (I never used binary card dumps myself.)
The advantage of punch cards was that you had unlimited storage capacity. When we switched the following year to three 16-bit minis, a full-screen editor and Pascal, we had 3 * 37 Mbyte for about a thousand freshman students. When the OS and system software had taken their share, each student had access to less than 100 kbyte on average, and no external storage option, so we had to do frequent disk cleanups (I was a TA at the time, and the TAs did much of the computer management).
|
|
|
|
|
[edit]
I had written the response below before I noticed that other folks had already replied with similar stories.
[/edit]
trønderen wrote: I guess that source code files were stored as plain text, using the selected set of word symbols, right? So you couldn't take your source file to another machine, with other concrete mappings, and have it compiled there.
That is correct - the source code was hand punched onto cards (80 column). I got quite adept with the multi-finger buttons for each character. You then put the box of punched cards into a holding area where someone would feed them to the card reader (hopefully without dropping them and randomly sorting them). Then the job was run and you got a line printer listing delivered to the same holding area (where, hopefully, your card deck was also returned) - this was the first time you could see the text you had actually written. At University, the turnaround time was half a day; at my first full time job it was nearer a fortnight; so computer run times were an insignificant part of the round-trip time.
Before Uni, I had to fill in coding sheets which were posted to the computer centre (round trip one or two weeks). This added an extra layer of jeopardy - would the cards be punched with the text as written on the coding sheets? The answer was almost invariably 'No' for at least three iterations, so the first run (enabling debugging) could be six weeks later than the date you wrote the program.
|
|
|
|
|
jschell wrote: First, at least last time I created a compiler much less studied compilers it is not "simple" to replace keywords. You certainly must know the semantics of the abstract token. And you must know the concrete language you want to use. This is not a job for Google Translate. But we are not talking about defining a new programming language, just modifying the symbol table used by the tokenizer. That is orders of magnitude simpler than making an all-new compiler.
jschell wrote: Second the latter part of that statement seems to suggest exactly the problem that compilers need to solve with the first part of what I said. Compilers (and interpreters) already convert key words into tokens. Those that do not are very inefficient (as I know since I had to work with one long ago.) Quite to the contrary. If you store the source code as its abstract tokens, the result of the tokenizer's job (and maybe even after a fundamental structure parsing), that heavy job you are referring to is done "once and for all". Or at least until the code is edited, but then reparsing is required only for the substructure affected by the edit. Further compilation would be significantly faster, as much of the work is already done.
We have had precompiled header files for many years. This is along the same lines, except that the pre-parsed code is the primary source representation, and the mapping back to ASCII symbols is only done for editing purposes, and only for the part of the code displayed to the programmer at the moment.
jschell wrote: Requirements, Architecture, Design are all in that natural language. All of those have much more impact on the solution than the actual code itself. In "a" natural language, yes. English. Far too often, the design language is English, even if the customer and end users have a different natural language.
The norm is that in the very first input steps from the customer, the native language is used. Then the developers retreat to their ivory tower to create The Solution, as they see it. First: then it is back to English, and second: the ties to whatever was discussed with the customer are next to non-existent. The software structure is not a more detailed break-up of the boxes you showed the customer. The module, function, method and data structure names are unrelated to the terms used in the discussion with the customer. If the customer has a complaint, saying for example "We think that when handling this and that form, in this and that procedure, you are not calculating the field the way it should be done. Can you show us how you do it?", then first: going from the functional module and the procedure in question, as identified in the discussion with the customer, to the right software module and method in the code may be far from straightforward. Developers do it their own way; the customer presentation is only for the customer. Second: if the developers are willing to let the customer see the code (most likely they refuse!), there is a very significant risk that the customer will have a hard time recognizing his problem: the programmers do not know the professional terminology, so they mislabel terms. They have never recognized the natural breakdown of partial tasks, solving the problem in ways that to the customer appear convoluted. The programmers have never seen the real application environment and do not know how different attributes really belong together.
If the developers did not switch from the native language to English as soon as the customer is out the door, far more real domain knowledge could be preserved in the solution. In the development process, the customer could provide a lot more domain knowledge, correcting and aiding the developers as they progress with the more detailed design, and on to the code design. A customer who knows the logic of the solution, even if he doesn't know the specific code statements, is able to provide a lot more helpful feedback, both in bug finding and in discussions about future extensions and additions.
I know very well that customers without a clue about computers can give very valuable input. They can discuss solution logic. They can discuss data flows and functional groups. I have done that in several projects, serving as a messenger between the end users and the development group. (And in one library project, when I put forward the wishes from the future users, the project leader sneered back at me: "F*** those librarians!" That was his honest attitude towards the end users; they were a source of noise, nothing more.)
My attitude towards customers and end users is at the very opposite end of the scale.
|
|
|
|
|
I am working on ranking different social influencers based on a set of metrics.
Metrics collected:
• username
• categories (the niche the influencer is in)
• influencer_type
• followers
• follower_growth, follower_growth_rate
• highlightReelCount, igtvVideoCount, postsCount
• avg_likes, avg_comments
• likes_comments_ratio (comments per 100 likes, used as an authenticity indicator)
• engagement_rate
• authentic_engagement (the number of likes and comments that come from real people)
• post_per_week
• 1/post_like, 1/post_comment (total 12 latest posts)
• 1/igtv_likes, 1/igtv_comment (total 12 latest igtvs)
Here's what the data looks like:
Sample_data - https://drive.google.com/file/d/15obMah9pGI3CutOZMJNqfr3O95rLz2JS/view?usp=sharing
Objective: Rank the social influencers according to their influential power with the use of the metrics collected above.
There are a few ranking algorithms to choose from, which are:
a) Compute the score for influential power with Multi-Criteria Decision Making (MCDM) and rank it with regression
b) Create classification model and rank them through probability
c) Compute the score for influential power with Multi-Criteria Decision Making (MCDM) and rank it with machine learning models like SVM, Decision Trees and Deep Neural Networks
d) Learning to rank algorithm like CatBoost
e) Trending algorithm
I would like to ask which algorithm above would be most suitable for this project, and could you compare them and provide the reasons? Any ideas will be much appreciated!
External links for algorithms:
1. [MCDM](https://towardsdatascience.com/ranking-algorithms-know-your-multi-criteria-decision-solving-techniques-20949198f23e)
2. [Catboost](https://catboost.ai/en/docs/concepts/python-quickstart)
3. [Trending algorithm](https://www.evanmiller.org/deriving-the-reddit-formula.html)
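For what it's worth, here is a minimal sketch of the scoring half of option (a) in C#: a plain weighted-sum MCDM over min-max normalised metrics. The three metrics, the record shape and the weights are all made up for illustration; choosing the weights (via AHP, entropy weighting, etc.) is the real MCDM work, and the regression/ML step would sit on top of these scores.

```csharp
// Sketch: weighted-sum MCDM score over min-max normalised metrics.
// Metrics, weights and record shape are hypothetical.
using System.Collections.Generic;
using System.Linq;

record Influencer(string Username, double Followers,
                  double EngagementRate, double AuthenticEngagement);

static class McdmRanker
{
    // Hypothetical weights; in practice derived via AHP, entropy weighting, etc.
    static readonly double[] Weights = { 0.3, 0.4, 0.3 };

    public static IEnumerable<(string Username, double Score)> Rank(
        IList<Influencer> xs)
    {
        // One column of raw values per criterion (all "benefit" criteria here).
        double[][] cols =
        {
            xs.Select(x => x.Followers).ToArray(),
            xs.Select(x => x.EngagementRate).ToArray(),
            xs.Select(x => x.AuthenticEngagement).ToArray(),
        };

        // Min-max normalise each criterion to [0, 1].
        double[][] norm = cols.Select(c =>
        {
            double min = c.Min(), span = c.Max() - min;
            return c.Select(v => span == 0 ? 0.5 : (v - min) / span).ToArray();
        }).ToArray();

        // Weighted sum per influencer, highest score first.
        return xs.Select((x, i) => (x.Username,
                   Score: Weights.Select((w, j) => w * norm[j][i]).Sum()))
                 .OrderByDescending(t => t.Score);
    }
}
```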
|
|
|
|
|
Ask each one how much money they make: the bank account algorithm.
"Before entering on an understanding, I have meditated for a long time, and have foreseen what might happen. It is not genius which reveals to me suddenly, secretly, what I have to say or to do in a circumstance unexpected by other people; it is reflection, it is meditation." - Napoleon I
|
|
|
|
|
I have a task to complete. I'm trying to learn Visual Studio. And I have another task of programmatically manipulating data inside an existing Excel worksheet.
This will involve opening, closing, saving, modifying cells, etc. of an Excel spreadsheet.
I also wish to learn the Visual Studio IDE. To do both, I'd like to write a simple example program (or see one) that uses Visual Studio, without any aftermarket libraries like Pandas or others.
Basically, what I'd like to do is install Visual Studio and all the relevant tools that come with it to support this task.
I have already installed Visual Studio 2022. I also have the Community version on another computer. I think I have options on what compiler to use. That is, I think my Visual Studio comes with the usual suspects C#, C++, Python, etc. Correct me if I'm wrong.
Questions:
1) How hard is it to use just Visual Studio and one of the included compilers to open, close, edit, save, etc., Excel files?
2) What included compiler should I use? Or rather, which is best for what I want to do?
3) Is there a good Visual Studio project template that illustrates doing what I need to do?
I might eventually use some of the add-on libraries, but for now I'd just like to keep it simple. I also realize that using something like Pandas, or other add-ons, might actually make it easier. But the learning curve might be longer.
I just need to be pointed in the right direction, for my initial tasks.
Thanks
Mike
|
|
|
|
|
|
Thanks. Yeah, I realize that I need to know how to use whatever language I use. I'm pretty good at reverse engineering, so that's why I wanted some good examples. Mainly, at this point, I'm just trying to use Visual Studio. I'm not familiar with this version. And I don't want to complicate the process by having to create and debug code to use it. Although that might be useful. This way, if I can open a project that is known good, all my effort will be in getting Visual Studio to run it.
I tried one of the VB example Excel projects, for instance, and right off the bat I see it's targeted at something wrong. It says it can't find the .NET Framework 3.5, and that one way to solve it is to install the SDK targeting package. Whatever that means. So now I'll begin to track that down. If I can successfully get this configured to run one of the examples, that is my goal accomplished for this part.
Thanks
|
|
|
|
|
Articles are generally for guidance, and will often be out of date with reference to the latest framework and version of Visual Studio. So unless you really understand what you are doing you are going to face some difficult problems.
|
|
|
|
|
Thanks,
I grabbed one from that list and opened it with Visual Studio, and it gave me a .NET Framework error. I installed the 3.5 version that it was missing, and the program continued, opened, and asked me for information to create the Excel spreadsheet. This is exactly what I was looking for.
Now, I'll look around on this and then when I understand the use of the IDE, I'll go find a more complex example.
Thanks
|
|
|
|
|
Excel has its own programming language; it's called VBA.
You start by proving to yourself why you have to use C# instead.
I would "program" my Excel sheets (in VBA) to generate SQL table definitions, for example.
"Before entering on an understanding, I have meditated for a long time, and have foreseen what might happen. It is not genius which reveals to me suddenly, secretly, what I have to say or to do in a circumstance unexpected by other people; it is reflection, it is meditation." - Napoleon I
|
|
|
|
|
I understand that. I've been using Excel since version 1 (I think). But I'm not trying to learn Excel. I'm trying to learn Visual Studio. It's rather obtuse, as far as I'm concerned. I first started using MSVC decades ago, with version 6. Haven't used it since.
So, I have a few goals (not one of them is learning Excel). I work in a test lab. We have existing tools that spit out Excel files when tests are completed. We then post-process those. Most of the software where I work uses Visual Studio and C#.
So, I would like to learn more about C#. And get back to knowing how to use Visual Studio. That's why I was looking for good examples of using Excel, in Visual Studio. That way I have files that should be easy to open in Visual Studio.
My first task will be to be able to open one of the examples, then correct all the build errors (.NET Framework mismatch, for instance), then execute one of the examples. At that point, I can begin looking at the code for working with Excel files.
|
|
|
|
|
Member 13159493 wrote: looking for good examples of using Excel, in Visual Studio. You need to understand that it is not Visual Studio that you use to open Excel files, but .NET and one of its languages, e.g. C# or VB.NET. And Visual Studio is used as a support tool for .NET, not the other way round.
|
|
|
|
|
Well, I know that. When I said that, I meant "open" with regard to seeing the files in Visual Studio. I know very well that the programming language I choose opens the files for manipulation. Visual Studio is just the IDE, for editing and managing the files.
All I'm doing at this point is trying to click on a solution file, have all the files come into Visual Studio, and fix all the errors when I click Start; THEN I will begin to investigate the actual programs. I have another computer that already has a more complex set of C# files that opens in Visual Studio. I can see all the files. But it's too complicated (with hundreds of files) to use as an example.
A perfect example for what I'm trying to do at this point would be a solution in C# or VB, with more than one file, that creates an Excel file, adds a few columns with header names (Col1, Col2, Col3), puts in a few rows of data (any types of values), does a few calculations, maybe some selection and sorting, and saves the file.
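For what it's worth, here's a minimal sketch of roughly that program using the Microsoft.Office.Interop.Excel COM API, which needs no third-party libraries but does require Excel to be installed and a project reference to the interop assembly (the file path and values are made up; proper COM cleanup is omitted for brevity):

```csharp
// Sketch: create a workbook, add headers Col1..Col3, a few rows of data,
// one calculation, then save. Requires Excel installed and a reference
// to Microsoft.Office.Interop.Excel. Path and values are hypothetical.
using Excel = Microsoft.Office.Interop.Excel;

class ExcelDemo
{
    static void Main()
    {
        var app = new Excel.Application();
        Excel.Workbook book = app.Workbooks.Add();
        Excel.Worksheet sheet = (Excel.Worksheet)book.Worksheets[1];

        // Header row.
        sheet.Cells[1, 1] = "Col1";
        sheet.Cells[1, 2] = "Col2";
        sheet.Cells[1, 3] = "Col3";

        // A few rows of data of mixed types.
        for (int row = 2; row <= 4; row++)
        {
            sheet.Cells[row, 1] = row - 1;            // integer
            sheet.Cells[row, 2] = (row - 1) * 10.5;   // double
            sheet.Cells[row, 3] = "text " + row;      // string
        }

        // A calculation done by Excel itself.
        sheet.Cells[5, 2] = "=SUM(B2:B4)";

        book.SaveAs(@"C:\temp\example.xlsx");
        book.Close();
        app.Quit();
    }
}
```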
Just understand I'm a hardware guy, with software interests. I won't ever have all the technical jargon down pat. Like my wife reminds me, "you are pretty good at everything, but not real good at anything". 
|
|
|
|
|
Up to this point, what you've described could be accomplished with a CSV file, which can be imported into / exported from Excel and is easier to work with (IMO).
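For instance, a minimal sketch of the CSV route in C# (hypothetical path and values), producing the same Col1..Col3 table with nothing beyond the standard library:

```csharp
// Sketch: the same Col1..Col3 table as plain CSV; Excel can open it
// directly, and no Excel installation or extra library is needed.
using System.IO;

class CsvDemo
{
    static void Main()
    {
        string[] lines =
        {
            "Col1,Col2,Col3",
            "1,10.5,text 2",
            "2,21,text 3",
            "3,31.5,text 4",
        };
        File.WriteAllLines(@"C:\temp\example.csv", lines);
    }
}
```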
"Before entering on an understanding, I have meditated for a long time, and have foreseen what might happen. It is not genius which reveals to me suddenly, secretly, what I have to say or to do in a circumstance unexpected by other people; it is reflection, it is meditation." - Napoleon I
|
|
|
|
|
Why not use Visual Basic for Applications (VBA)?
|
|
|
|
|
Hello!
Given a 2D array, each node in the array may or may not be "alive", based on random values.
I'm trying to find a way to connect "live" nodes in this 2D array to each other so that no isolated groups are created during the process, meaning that from any live node I can reach any other live node via the created paths.
A picture to visualize the problem.[^]
Can someone point me in the right direction?
|
|
|
|
|
Hi,
You are not really giving enough information about the problem you are trying to solve. The screenshot implies that there are impassable nodes.
Member 15615426 wrote: Can someone point me in the right direction? If you are trying to find an Eulerian path, check out Fleury's algorithm, as that might be what you're looking for.
|
|
|
|
|
You can use BFS or DFS to determine whether or not there is a path. You don't need to build a separate graph to run the BFS; the matrix itself serves as one. Begin the traversal from one corner, and if there is a way to reach the opposite corner, there is a path.
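For what it's worth, a minimal sketch of that in C# (names made up): one BFS over the matrix itself, with 4-neighbour adjacency; every live cell is reachable from every other iff a single BFS from any live cell visits them all.

```csharp
// Sketch: BFS over the matrix itself, no explicit graph. Live cells are
// nodes, 4-neighbour adjacency gives the edges. All names hypothetical.
using System.Collections.Generic;

static class GridConnectivity
{
    public static bool AllLiveCellsConnected(bool[,] alive)
    {
        int rows = alive.GetLength(0), cols = alive.GetLength(1);
        var start = (-1, -1);
        int liveCount = 0;
        for (int r = 0; r < rows; r++)
            for (int c = 0; c < cols; c++)
                if (alive[r, c]) { liveCount++; if (start.Item1 < 0) start = (r, c); }
        if (liveCount <= 1) return true; // trivially connected

        var seen = new bool[rows, cols];
        var queue = new Queue<(int r, int c)>();
        queue.Enqueue(start);
        seen[start.Item1, start.Item2] = true;
        int visited = 0;
        int[] dr = { 1, -1, 0, 0 }, dc = { 0, 0, 1, -1 };
        while (queue.Count > 0)
        {
            var (r, c) = queue.Dequeue();
            visited++;
            for (int k = 0; k < 4; k++)
            {
                int nr = r + dr[k], nc = c + dc[k];
                if (nr >= 0 && nr < rows && nc >= 0 && nc < cols
                    && alive[nr, nc] && !seen[nr, nc])
                {
                    seen[nr, nc] = true;
                    queue.Enqueue((nr, nc));
                }
            }
        }
        // Connected iff the single BFS reached every live cell.
        return visited == liveCount;
    }
}
```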
|
|
|
|
|
Hello!
How do virtual sports games really work?
I know they use a random number generator to simulate the action and determine the outcome, but I would like to know how the whole mechanism works. How advanced are these algorithms? When a computer is functioning correctly, nothing it does is random, so the results of games can't be totally random. Am I wrong?
|
|
|
|
|
The "random number generator" (i.e. dice, before computers), and the probabilities of a certain event happening are what determine the outcome.
There are always a number of possibilities (winning, losing, completely defeated, routed, etc.), based on the scenario.
Based on other factors (numerical superiority, morale, condition, prior movements, weather, etc.), the probability of a certain outcome is predicted (using prior history) and a possible result and consequence is assigned. The probability of a certain (dice) throw is then matched to the relative probabilities of the predicted events. One can use as many "dice" (random numbers) as are needed to cover the spread.
The throw is made, the number is matched to the probability assigned to a given outcome, and you have a result.
The key is that the probability of a given event varies with the factors in effect at that given time. You obviously need to capture (or simulate) all those factors in order to make a realistic "game".
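A minimal sketch of that matching step in C# (the outcome names, base weights and the morale factor are all made up): situational factors scale the weights, then one uniform draw is matched against the cumulative distribution.

```csharp
// Sketch: weighted random outcome selection. Outcomes, weights and the
// morale factor are hypothetical; real games tune these from history.
using System;
using System.Collections.Generic;

static class BattleSim
{
    static readonly Random Rng = new Random();

    public static string Resolve(double moraleFactor)
    {
        // Base probabilities, scaled by a situational factor.
        var outcomes = new List<(string name, double weight)>
        {
            ("win",       0.40 * moraleFactor),
            ("lose",      0.35 / moraleFactor),
            ("routed",    0.15 / moraleFactor),
            ("stalemate", 0.10),
        };

        double total = 0;
        foreach (var o in outcomes) total += o.weight;

        // The "dice throw": one uniform draw matched to cumulative weights.
        double throwValue = Rng.NextDouble() * total, cumulative = 0;
        foreach (var (name, weight) in outcomes)
        {
            cumulative += weight;
            if (throwValue <= cumulative) return name;
        }
        return outcomes[^1].name; // floating-point guard
    }
}
```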
"Before entering on an understanding, I have meditated for a long time, and have foreseen what might happen. It is not genius which reveals to me suddenly, secretly, what I have to say or to do in a circumstance unexpected by other people; it is reflection, it is meditation." - Napoleon I
|
|
|
|
|
How do I mathematically solve recurrence relations of the following forms:
1. T(n) = 2^n T(n/2) + n^n
2. T(n) = 4T(n/2) + n^2 log n
Is there a generic method to solve these?
I realize that the master theorem is not applicable to these forms because, in 1, 2^n is not a constant, and 2 does not fall into any of the 3 cases of the master theorem.
|
|
|
|
|
Hi,
Dawood Ahmad 2021 wrote: Is there a generic method to solve these?
There are three common ways to solve these:
1.) Master theorem
2.) Substitution
3.) Recursion tree
As you pointed out, you cannot use the master theorem here, so you should investigate the other two methods.
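For example, here is a recursion-tree sketch for the second recurrence (a worked derivation, assuming T(1) = Θ(1) and n a power of 2). The first recurrence unrolls the same way; there the n^n term dominates every recursive contribution (the largest is 2^n (n/2)^(n/2) = (2n)^(n/2), which is o(n^n)), giving T(n) = Θ(n^n).

```latex
% Recursion tree for T(n) = 4T(n/2) + n^2 \log n:
% level i has 4^i subproblems of size n/2^i, each costing (n/2^i)^2 \log(n/2^i), so
\text{cost of level } i \;=\; 4^i \left(\frac{n}{2^i}\right)^{\!2} \log\frac{n}{2^i}
                        \;=\; n^2 (\log n - i).
% Summing over the \log n levels, plus \Theta(n^2) for the n^2 leaves:
T(n) \;=\; \sum_{i=0}^{\log n - 1} n^2 (\log n - i) \;+\; \Theta(n^2)
     \;=\; n^2 \cdot \frac{\log n \,(\log n + 1)}{2} \;+\; \Theta(n^2)
     \;=\; \Theta\!\left(n^2 \log^2 n\right).
```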
|
|
|
|
|