|
A fairly common occurrence in embedded products.
"the debugger doesn't tell me anything because this code compiles just fine" - random QA comment
"Facebook is where you tell lies to your friends. Twitter is where you tell the truth to strangers." - chriselst
"I don't drink any more... then again, I don't drink any less." - Mike Mullikins uncle
|
|
|
|
|
Yes, directly in Interrupt Service Routines to keep an electric motor running at a defined RPM and torque. Depending on the manufacturer you can have 4k or 8k samples per second in order to apply the Park and Clarke transformations (and to run the PI observer). Such transformations are quite heavy, since they are 3D geometrical transforms over complex numbers.
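For reference, a minimal sketch of the two transforms in plain C - not the poster's actual ISR; the function names, sample values and rates here are made up for illustration:

#include <math.h>
#include <stdio.h>

/* Clarke transform: three-phase currents (ia, ib, ic with ia + ib + ic == 0)
   -> two-axis stationary frame (alpha, beta). */
static void clarke(float ia, float ib, float *alpha, float *beta)
{
    *alpha = ia;
    *beta  = (ia + 2.0f * ib) * 0.57735026919f;   /* 1/sqrt(3) */
}

/* Park transform: rotate the stationary-frame vector by the rotor angle
   theta, giving the rotating-frame components (id, iq) that the PI
   current loops work on. */
static void park(float alpha, float beta, float theta, float *id, float *iq)
{
    float s = sinf(theta), c = cosf(theta);
    *id =  alpha * c + beta * s;
    *iq = -alpha * s + beta * c;
}

int main(void)
{
    /* One sample of what the ISR would do 4000-8000 times per second;
       the phase currents and rotor angle below are made-up numbers. */
    float alpha, beta, id, iq;
    clarke(1.0f, -0.3f, &alpha, &beta);
    park(alpha, beta, 0.5f, &id, &iq);
    printf("id = %f, iq = %f\n", id, iq);
    return 0;
}

In a real ISR the inputs would come from the ADC and an encoder, and sinf/cosf would usually be replaced by a lookup table or the vendor's CORDIC/FPU routines.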
GCS/GE d--(d) s-/+ a C+++ U+++ P-- L+@ E-- W+++ N+ o+ K- w+++ O? M-- V? PS+ PE Y+ PGP t+ 5? X R+++ tv-- b+(+++) DI+++ D++ G e++ h--- r+++ y+++* Weapons extension: ma- k++ F+2 X
|
|
|
|
|
This is where assembler kicks in.
I think I would enjoy your job.
|
|
|
|
|
We actually don't need it here: the SoCs are well designed, single-cycle RAM access really helps, and the optimizer is good.
I had to turn to Assembler on an Intel system to perform high-speed image manipulations (e.g. rotating an image 90 degrees in either direction with any mirroring in microseconds, normalizing the gray levels...). SIMD instructions are fun, even if the SSE ones are asinine sometimes.
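For what it's worth, the scalar core of a 90-degree rotation is just an index remap; here is a minimal sketch in plain C (the microsecond-level speed described above came from SIMD and cache blocking, which are not shown here):

#include <stdint.h>
#include <stdio.h>

/* Rotate an 8-bit grayscale image 90 degrees clockwise.
   src is w x h (row-major); dst must hold h x w pixels.
   Source pixel (x, y) lands at destination pixel (h - 1 - y, x). */
static void rotate90_cw(const uint8_t *src, uint8_t *dst, int w, int h)
{
    for (int y = 0; y < h; y++)
        for (int x = 0; x < w; x++)
            dst[x * h + (h - 1 - y)] = src[y * w + x];
}

int main(void)
{
    /* Tiny 3x2 test image so the result is easy to check by eye. */
    const uint8_t src[6] = { 1, 2, 3,
                             4, 5, 6 };
    uint8_t dst[6];

    rotate90_cw(src, dst, 3, 2);            /* result is 2 wide, 3 high */

    for (int y = 0; y < 3; y++)
        printf("%d %d\n", dst[y * 2], dst[y * 2 + 1]);
    return 0;
}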
GCS/GE d--(d) s-/+ a C+++ U+++ P-- L+@ E-- W+++ N+ o+ K- w+++ O? M-- V? PS+ PE Y+ PGP t+ 5? X R+++ tv-- b+(+++) DI+++ D++ G e++ h--- r+++ y+++* Weapons extension: ma- k++ F+2 X
|
|
|
|
|
Hmmm. A 6502 running at a few GHz...
One thing is certain - you wouldn't need a heater in the car.
Freedom is the freedom to say that two plus two make four. If that is granted, all else follows.
-- 6079 Smith W.
|
|
|
|
|
Daniel Pfeffer wrote: Hmmm. A 6502 running at a few GHz...
Don't you mean MHz?
"Time flies like an arrow. Fruit flies like a banana."
|
|
|
|
|
The 6502 did run at up to 3 MHz.
Imagine how it would perform at a thousand times the original top speed! An Apple II Mark II could turn out to be a hefty machine ...
I don't know enough about hardware to tell whether it would be at all possible to make a 3 GHz version. You would probably have to abandon 99% of the implementation technology, but maybe you could retain the instruction set, memory model, etc.
One of my dreams is that "someone" had the resources to pick up some of the concepts/architectures abandoned a generation ago because the technology wasn't ready for them. Take the iAPX 432, an object-oriented machine taking the object concept to extremes. E.g. you could send an object to another process, but then you lost it yourself. Obviously, you would have to review and update the architecture; e.g. the original 432 could only offer 8 Ki objects per process. I am quite sure that the technology we have today would be capable of implementing a 432 Mark II with both sufficient functionality and sufficient performance to be useful.
I am not suggesting that you could make the market accept an object-oriented processor, though. When anyone mentions the tremendous speed of technological change in the digital age, one of my primary counter-arguments is that the 1978 x86 architecture is still dominant, 45 years later (although in revised versions - or cancerous versions, if you prefer).
|
|
|
|
|
I meant what I wrote.
Freedom is the freedom to say that two plus two make four. If that is granted, all else follows.
-- 6079 Smith W.
|
|
|
|
|
Overlays (in 4K):
Load "read a card"
Read a card
Load "process card"
Process card
Load "write"
Write output
Load "read a card"
...
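A toy sketch of that loop in C, purely to show its shape - on a real 4K machine load_overlay() would pull each phase's code from cards/tape/disk into one fixed memory region, overwriting the previous phase; here a function pointer stands in for that:

#include <stdio.h>

typedef void (*overlay_fn)(void);

static void read_a_card(void)  { puts("read a card");  }
static void process_card(void) { puts("process card"); }
static void write_output(void) { puts("write output"); }

enum phase { READ, PROCESS, WRITE };

/* Stand-in for the real overlay loader. */
static overlay_fn load_overlay(enum phase p)
{
    switch (p) {
    case READ:    return read_a_card;
    case PROCESS: return process_card;
    default:      return write_output;
    }
}

int main(void)
{
    for (int card = 0; card < 2; card++) {  /* two cards, just for the demo */
        load_overlay(READ)();               /* load "read a card", run it   */
        load_overlay(PROCESS)();            /* load "process card", run it  */
        load_overlay(WRITE)();              /* load "write", run it         */
    }
    return 0;
}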
"Before entering on an understanding, I have meditated for a long time, and have foreseen what might happen. It is not genius which reveals to me suddenly, secretly, what I have to say or to do in a circumstance unexpected by other people; it is reflection, it is meditation." - Napoleon I
|
|
|
|
|
The Commodore PET 2001 only had a single tape recorder for I/O, which was slow as molasses. Using it for any overlay mechanism might have been technically possible, but would have been impractical.
Later models had optional diskette drives (I don't recall if an HDD was available), but they also had much more memory (up to 96K, IIRC).
Freedom is the freedom to say that two plus two make four. If that is granted, all else follows.
-- 6079 Smith W.
|
|
|
|
|
Are you sure that cost is the only factor? When I was working with embedded/IoT chips, the main reason for not extending RAM was to save power. In a number of applications, having to replace batteries more often can raise maintenance costs to several times the extra cost of the chip. Maybe our strongest sales argument was that you could build devices that would run for a year or two on a small button cell. (We sold chips; we were not buyers.) The main reason why the old, small chips are still being sold by the truckload is not the low price, but the low power consumption.
Yes, I have seen my share of dirty tricks to cram the necessary functionality into a total of 64 KiB. There was a lot of bitching and swearing. My first assignment was to implement the complete Bluetooth Test Mode in at most 1200 bytes. (I clocked in at 1103 bytes.) The company was awarded a patent for a method of turning on the Bluetooth receiver a fraction of a microsecond later, to save power.
Maybe, in your company, chip cost is the only argument and battery life is of no concern. It could also be that you have not heard all the arguments your management has collected. Maybe the sales force has reported that customers are complaining about battery life. Maybe management never brought that down to you. Maybe they did, but not in flashing, bold, red letters, so you overlooked it.
If cost really is the only argument for a smaller chip: your management seems to completely ignore the cost of all those twists and tricks the developers have to do to fit everything in. When we went from an 8-bit MCU (8051) to a 32-bit one (ARM Cortex-M0), developer productivity rose sharply, because we could spend time on programming the solution rather than on tricks to get around hardware limitations.
|
|
|
|
|
Not power consumption: when it needs to save power it goes into deep sleep and only the network transceiver stays powered, with the edge directly triggering the power-on pin.
When in operation it drives motors that range from 400 to 1500 watts, so a milliwatt here and there is not an issue.
I did work on an MCU where power consumption in sleep was measured in microamperes (2 months of work to lower it from 520 uA to 499 uA), but that is not the case here.
It's cost, and it comes straight down from our overlords in Hong Kong.
GCS/GE d--(d) s-/+ a C+++ U+++ P-- L+@ E-- W+++ N+ o+ K- w+++ O? M-- V? PS+ PE Y+ PGP t+ 5? X R+++ tv-- b+(+++) DI+++ D++ G e++ h--- r+++ y+++* Weapons extension: ma- k++ F+2 X
|
|
|
|
|
Now I have a big table in a database; what is the best way to get an understanding of this table quickly? It has hundreds of fields and I only know some of the keys.
I have some basic ideas already, but I would like to learn some new tricks from the gurus here. Somehow I am a little addicted to asking questions here.
diligent hands rule....
|
|
|
|
|
Is this a NoSQL table, or a flattened table as is common in high performance environments such as banking?
|
|
|
|
|
It is a high-performance table in Teradata...
diligent hands rule....
|
|
|
|
|
What type of database? If it's SQL Server, use
exec sp_help '<your table name>'
For Oracle:
sp_helptable '<your table name>'
Anything else - use Google. Informix or InterBase - you are out of luck...
Advertise here – minimum three posts per day are guaranteed.
|
|
|
|
|
very helpful. Thank you!
diligent hands rule....
|
|
|
|
|
Southmountain wrote: now I have a big table in a database, what is the best way to get understanding of this table quickly? Documentation. If there's a table, there's a developer and there should be documentation.
Southmountain wrote: it has hundreds of fields and I only know some keys. Hundreds of fields??
DROP TABLE would be the best start; no normalized table contains that many fields.
I'm serious; no such table should exist. You asking how to understand it implies there is no documentation either.
Name your company.
Bastard Programmer from Hell
"If you just follow the bacon Eddy, wherever it leads you, then you won't have to think about politics." -- Some Bell.
|
|
|
|
|
Eddy Vluggen wrote: DROP TABLE would be the best start;
lol...
I worked with a table like that. It was batch loaded every night from some other mysterious source that was definitely COBOL and probably DB2. What I did know was that on the COBOL side they had reached the maximum number of columns of the system. It would not allow them to add any more.
I think there was something like 300 or 400 columns.
But 200 or so were just for a single indexed value. Something like column 30 held an int, and the value in that column pointed to one of another 200 sequential columns that held the actual value. The rest of those 200 columns were null.
Probably could not have dropped it. It held credit card transaction data.
|
|
|
|
|
If you design a database, you normalize the model.
jschell wrote: I think there was something like 300 or 400 columns Give or take 50 columns.
That's not design, that's a disaster.
jschell wrote: Probably could not have dropped it. It held credit card transaction data. That's why I stopped visiting the hospital. I don't want to die by VB6.
Bastard Programmer from Hell
"If you just follow the bacon Eddy, wherever it leads you, then you won't have to think about politics." -- Some Bell.
|
|
|
|
|
Eddy Vluggen wrote: That's why I stopped visiting the hospital. I don't wont to die by VB6 You'll be fine if you believe in reincarnation...
On Error Resume Next Life
|
|
|
|
|
In the early 1980s, one model that was proposed was 'the universal relation'. The database had a single relation (table) for all applications. A new application might need some new fields/columns and added those, but usually it also made use of columns already in the universal relation.
There was at least one implementation of this model - I'm sorry, I can't remember what it was called - and the developers claimed that having everything in one relation drastically simplified some query optimizations. I see that the idea even has a brief Wikipedia entry: Universal relation assumption[^], stating that "real database designs is often plagued with a number of difficulties". So there were reasons why it didn't succeed. Yet it did have some benefits as well. Maybe those who designed the table you have been introduced to were trying to capture some of those.
The Wikipedia article links to a slide set for a talk, "Who won the Universal Relation war?". It is very much a slide set - you can't learn much about Universal Relations from it. But it gives you a certain impression of the magnitude and intensity of the debate, 30-40 years ago.
|
|
|
|
|
There's a good reason why it is not practiced anymore:
It didn't work.
--edit
I still like the story though.
Bastard Programmer from Hell
"If you just follow the bacon Eddy, wherever it leads you, then you won't have to think about politics." -- Some Bell.
|
|
|
|
|
The only acceptable reason for having such a flat schema would be performance, and there are many, many better ways of capturing the performance required if that's a concern. At the heart of it, that relations table would have a lot of null values and would seem only to simplify joins - in which case perhaps they should just learn SQL views better if they wish to reduce joins.
On the performance side, if read speed needs to be optimized, it's OK to have a flattened, cached table or NoSQL doc storage with flattened data that is hydrated from the unflattened tables in a one-way sync. But the core data model that's the source of truth shouldn't be janky.
Jeremy Falcon
|
|
|
|
|
Although it used SQL as the backend, I remember a Customer Relationship Management system called Maximiser that took a similar approach. There were, ISTR, just two tables: one to hold all the relatively constant client data itself and one to hold the collection of notes linked to that.
In the Maximiser app there were complex joins on that one table, producing 'subtables' that held various views of the data. Some columns contained numbers that indicated what other columns actually held! I was given the job of moving all the data held in this system to another SQL-based program.
It took ages (in the absence of any database schema documentation) to unravel the various combinations of joins required to get what we wanted. Here's just one query to extract a little of the info. The tables aliased as a, b, c, d, etc. duplicate joins used in the 'built-in' queries on the Maximiser database.
I thought you might find an example of the stuff I had to build mildly amusing 8)
-- Build the View of the Maximiser data that shows what we want and store it
SELECT
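-- Company: build a display name from the row itself (c) and, for Record_Type 31/32, the Contact_Number = 0 row for the same client (d)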
CASE
WHEN c.Record_Type = 1 THEN c.Name
WHEN c.Record_Type = 31 THEN d.Name + ' - ' + c.First_Name + ' ' + c.Name
WHEN c.Record_Type = 2 AND len(c.Firm) > 0 THEN c.Firm
WHEN c.Record_Type = 2 AND len(c.Firm) < 1 THEN c.First_Name + ' ' + c.Name
WHEN c.Record_Type = 32 THEN
(
CASE
WHEN len(d.Firm) > 0 THEN d.Firm + ' - ' + c.First_Name + ' ' + c.Name
WHEN len(d.Firm) < 1 THEN d.First_Name + ' ' + d.Name + ' - ' + c.First_Name + ' ' + c.Name
END
)
ELSE c.Name
END AS Company,
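-- The six address blocks below all follow the same pattern: pick each address column from the self-joined aliases e, f or g, or fall back to the row's own columns, depending on Record_Type and Address_Id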
CASE
WHEN c.Address_Id > 0 AND c.Record_Type IN (1, 31) THEN e.Address_Line_1
WHEN c.Address_Id < 1 AND c.Record_Type = 31 THEN g.Address_Line_1
WHEN c.Address_Id > 0 AND c.Record_Type IN (2, 32) THEN f.Address_Line_1
WHEN c.Address_Id < 1 AND c.Record_Type = 32 THEN g.Address_Line_1
ELSE c.Address_Line_1
END AS Address_1,
CASE
WHEN c.Address_Id > 0 AND c.Record_Type IN (1, 31) THEN e.Address_Line_2
WHEN c.Address_Id < 1 AND c.Record_Type = 31 THEN g.Address_Line_2
WHEN c.Address_Id > 0 AND c.Record_Type IN (2, 32) THEN f.Address_Line_2
WHEN c.Address_Id < 1 AND c.Record_Type = 32 THEN g.Address_Line_2
ELSE c.Address_Line_2
END AS Address_2,
CASE
WHEN c.Address_Id > 0 AND c.Record_Type IN (1, 31) THEN e.City
WHEN c.Address_Id < 1 AND c.Record_Type = 31 THEN g.City
WHEN c.Address_Id > 0 AND c.Record_Type IN (2, 32) THEN f.City
WHEN c.Address_Id < 1 AND c.Record_Type = 32 THEN g.City
ELSE c.City
END AS City,
CASE
WHEN c.Address_Id > 0 AND c.Record_Type IN (1, 31) THEN e.State_Province
WHEN c.Address_Id < 1 AND c.Record_Type = 31 THEN g.State_Province
WHEN c.Address_Id > 0 AND c.Record_Type IN (2, 32) THEN f.State_Province
WHEN c.Address_Id < 1 AND c.Record_Type = 32 THEN g.State_Province
ELSE c.State_Province
END AS State,
CASE
WHEN c.Address_Id > 0 AND c.Record_Type IN (1, 31) THEN e.Zip_Code
WHEN c.Address_Id < 1 AND c.Record_Type = 31 THEN g.Zip_Code
WHEN c.Address_Id > 0 AND c.Record_Type IN (2, 32) THEN f.Zip_Code
WHEN c.Address_Id < 1 AND c.Record_Type = 32 THEN g.Zip_Code
ELSE c.Zip_Code
END AS Zip,
CASE
WHEN c.Address_Id > 0 AND c.Record_Type IN (1, 31) THEN e.Country
WHEN c.Address_Id < 1 AND c.Record_Type = 31 THEN g.Country
WHEN c.Address_Id > 0 AND c.Record_Type IN (2, 32) THEN f.Country
WHEN c.Address_Id < 1 AND c.Record_Type = 32 THEN g.Country
ELSE c.Country
END AS Country,
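-- Map the numeric note Type to a readable activity label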
CASE
WHEN n.Type = 0 THEN 'Manual Note'
WHEN n.Type = 1 THEN 'Mail - Out'
WHEN n.Type = 2 THEN 'Phone Call'
WHEN n.Type = 3 THEN 'Timed Note'
WHEN n.Type = 4 THEN 'Transfer'
WHEN n.Type = 5 THEN 'Task'
WHEN n.Type = 6 THEN 'Reserved'
WHEN n.Type = 7 THEN 'Reserved'
WHEN n.Type = 8 THEN 'Opportunity'
WHEN n.Type = 12 THEN 'Customer Service'
ELSE 'Unknown'
END AS Activity_Type,
n.DateCol, n.TextCol, n.Owner_Id, n.Client_Id, n.Contact_Number, n.Note_Type,
' ' AS sndex
INTO BookerNotes
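-- d, e, f and g are all self-joins on the same AMGR_Client_Tbl; n is the notes table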
FROM
dbo.AMGR_Client_Tbl AS c
LEFT OUTER JOIN dbo.AMGR_Client_Tbl AS d ON c.Client_Id = d.Client_Id AND d.Contact_Number = 0
LEFT OUTER JOIN dbo.AMGR_Client_Tbl AS e ON c.Client_Id = e.Client_Id AND c.Address_Id = e.Contact_Number
LEFT OUTER JOIN dbo.AMGR_Client_Tbl AS f ON c.Client_Id = f.Client_Id AND c.Address_Id = f.Contact_Number
LEFT OUTER JOIN dbo.AMGR_Client_Tbl AS g ON c.Client_Id = g.Client_Id AND g.Contact_Number = 0
RIGHT OUTER JOIN dbo.AMGR_Notes_Tbl AS n ON c.Client_Id = n.Client_Id AND c.Contact_Number = n.Contact_Number
WHERE c.Record_Type IN (1, 2, 31, 32)
GO
|
|
|
|
|