|
Are you sure that cost is the only factor? When I was working with embedded/IoT chips, the main reason for not extending RAM was to save power. In a number of applications, having to replace batteries more often can raise maintenance costs by several times the extra cost of the chip. Maybe our strongest sales argument was that you could build devices that would run for a year or two on a small button cell. (We sold chips; we were not buyers.) The main reason the old, small chips are still being sold by the truckload is not the low price, but the low power consumption.
Yes, I have seen my share of dirty tricks to cram the necessary functionality into a total of 64 KiB. There was a lot of bitching and swearing. My first assignment was to implement the complete Bluetooth Test Mode in at most 1200 bytes. (I clocked in at 1103 bytes.) The company was awarded a patent for a method of delaying turning on the Bluetooth receiver by a fraction of a microsecond, to save power.
Maybe, in your company, chip cost is the only argument and battery life is of no concern. It could also be that you have not heard all the arguments your management has collected. Maybe the sales force has reported that customers are complaining about battery life. Maybe management never brought that down to you. Maybe they did, but not in flashing, bold, red letters, so you overlooked it.
If cost really is the only argument for a smaller chip: your management seems to be completely ignoring the cost of all the twists and tricks the developers have to use to fit everything in. When we went from an 8-bit MCU (8051) to a 32-bit one (ARM Cortex-M0), developer productivity rose sharply, because we could spend time on programming the solution rather than on tricks to get around hardware limitations.
|
|
|
|
|
Not power consumption: when it needs to save power it goes into deep sleep, with only the network transceiver powered on and the edge directly triggering the power-on pin.
When in operation it drives motors that range from 400 to 1500 watts, so a milliwatt here and there is not an issue.
I did work on an MCU where sleep-mode consumption was measured in microamperes (two months of work to get it from 520 uA down to 499 uA), but that is not the case here.
It's cost, and it comes straight down from our overlords in Hong Kong.
GCS/GE d--(d) s-/+ a C+++ U+++ P-- L+@ E-- W+++ N+ o+ K- w+++ O? M-- V? PS+ PE Y+ PGP t+ 5? X R+++ tv-- b+(+++) DI+++ D++ G e++ h--- r+++ y+++* Weapons extension: ma- k++ F+2 X
|
|
|
|
|
Now I have a big table in a database. What is the best way to get an understanding of this table quickly? It has hundreds of fields and I only know some of the keys.
I have some basic ideas already, but I would like to learn some new tricks from the gurus here. Somehow I am a little addicted to asking questions here.
diligent hands rule....
|
|
|
|
|
Is this a NoSQL table, or a flattened table as is common in high performance environments such as banking?
|
|
|
|
|
it is a high performance table in Teradata...
diligent hands rule....
|
|
|
|
|
What type of database? If it's SQL Server, use
exec sp_help '<your table name>' ; for Oracle:
DESCRIBE <your table name> (or query ALL_TAB_COLUMNS); anything else - use Google. Informix or Interbase - you are out of luck...
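Since the OP mentioned Teradata further up, here is a rough sketch of the built-in ways to inspect a table there (this assumes you have SELECT access to the DBC views; MyDatabase and MyBigTable are placeholders):
-- Show column names, types and formats for one table
HELP TABLE MyDatabase.MyBigTable;
-- Show the full DDL used to create it
SHOW TABLE MyDatabase.MyBigTable;
-- Or query the data dictionary directly
SELECT ColumnName, ColumnType, ColumnLength, Nullable
FROM   DBC.ColumnsV
WHERE  DatabaseName = 'MyDatabase'
AND    TableName    = 'MyBigTable'
ORDER BY ColumnId;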
Advertise here – minimum three posts per day are guaranteed.
|
|
|
|
|
very helpful. Thank you!
diligent hands rule....
|
|
|
|
|
Southmountain wrote: now I have a big table in a database, what is the best way to get understanding of this table quickly? Documentation. If there's a table, there's a developer and there should be documentation.
Southmountain wrote: it has hundreds of fields and I only know some keys. Hundreds of fields??
DROP TABLE would be the best start; no normalized table contains that many fields.
I'm serious; no such table should exist. The fact that you're asking how to understand it implies there is no documentation either.
Name your company.
Bastard Programmer from Hell
"If you just follow the bacon Eddy, wherever it leads you, then you won't have to think about politics." -- Some Bell.
|
|
|
|
|
Eddy Vluggen wrote: DROP TABLE would be the best start;
lol...
I worked with a table like that. It was batch loaded every night from some other mysterious source that was definitely COBOL and probably DB2. What I did know was that on the COBOL side they had reached the maximum number of columns the system allowed. It would not let them add any more.
I think there was something like 300 or 400 columns.
But 200 or so of them were just for a single indexed value. Something like column 30 held an int, and the value in that column pointed to one of another 200 sequential columns that held the actual value. The rest of those 200 columns were null.
Probably could not have dropped it. It held credit card transaction data.
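To make that layout concrete: with a scheme like that, every read ends up being one big CASE over the pointer column. Purely illustrative, with made-up table and column names, not the actual schema:
-- Hypothetical sketch of the "column 30 points at the real value" layout
SELECT txn_id,
       CASE value_index          -- the pointer column: which slot is populated
            WHEN 1 THEN slot_001
            WHEN 2 THEN slot_002
            WHEN 3 THEN slot_003
            -- ... one WHEN per slot, up to slot_200
            ELSE NULL
       END AS actual_value
FROM   mysterious_cobol_feed;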
|
|
|
|
|
If you design a database, you normalize the model.
jschell wrote: I think there was something like 300 or 400 columns Give or take 50 columns.
That's not design, that's a disaster.
jschell wrote: Probably could not have dropped it. It held credit card transaction data. That's why I stopped visiting the hospital. I don't want to die by VB6.
Bastard Programmer from Hell
"If you just follow the bacon Eddy, wherever it leads you, then you won't have to think about politics." -- Some Bell.
|
|
|
|
|
Eddy Vluggen wrote: That's why I stopped visiting the hospital. I don't wont to die by VB6 You'll be fine if you believe in reincarnation...
On Error Resume Next Life
|
|
|
|
|
In the early 1980s, one model that was proposed was 'the universal relation'. The database had a single relation (table) for all applications. A new application might need some new fields/columns and add those, but usually it also made use of columns already in the universal relation.
There was at least one implementation of this model - I'm sorry, I can't remember what it was called - and the developers claimed that having everything in one relation drastically simplified some query optimizations. I see that the idea even has a brief Wikipedia entry: Universal relation assumption[^], stating that "real database designs is often plagued with a number of difficulties". So there were reasons why it didn't succeed. Yet it did have some benefits as well. Maybe those who designed the relation you have been introduced to were trying to capture some of those.
The Wikipedia article links to a slide set for a talk, "Who won the Universal Relation war?". It is very much a slide set - you can't learn much about Universal Relations from it. But it gives you a certain impression of the magnitude and intensity of the debate 30-40 years ago.
|
|
|
|
|
There's a good reason why it is not practiced anymore:
It didn't work.
--edit
I still like the story though.
Bastard Programmer from Hell
"If you just follow the bacon Eddy, wherever it leads you, then you won't have to think about politics." -- Some Bell.
|
|
|
|
|
The only acceptable reason for having such a flat schema would be performance, and there are many, many better ways of getting the performance required if that's a concern. At the heart of it, that relations table would have a lot of null values and would seem to only simplify joins - in which case perhaps they should just learn SQL views better if they wish to reduce joins.
On the performance side, if read speed needs to be optimized, it's OK to have a flattened, cached table or NoSQL document storage with flattened data that is hydrated from the unflattened tables in a one-way sync. But the core data model that's the source of truth shouldn't be janky.
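For what it's worth, the "learn views" point is cheap to act on: a view defines the join once and everyone else queries it like a flat table. A minimal sketch, with made-up table and column names:
-- Define the join once...
CREATE VIEW customer_orders AS
SELECT c.customer_id, c.customer_name, o.order_id, o.order_date, o.total
FROM   customers c
JOIN   orders    o ON o.customer_id = c.customer_id;
-- ...then readers treat it as if it were one flat table
SELECT customer_name, order_date, total
FROM   customer_orders
WHERE  total > 100;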
Jeremy Falcon
|
|
|
|
|
Although it used SQL as the backend, I remember a Customer Relationship Management system called Maximiser that took a similar approach. There were, ISTR, just two tables: one to hold all the relatively constant client data itself and one to hold the collection of notes linked to it.
In the Maximiser app there were complex joins on one table producing 'subtables' that held various views of the data. Some columns contained numbers that indicated what other columns actually held! I was given the job of moving all the data held in this system to another SQL-based program.
It took ages (in the absence of any database schema documentation) to unravel the various combinations of joins actually required to get what we wanted. Here's just one query, to extract a little of the info. The tables aliased a, b, c, d, etc. duplicate joins used in the 'built-in' queries on the Maximiser database.
I thought you might find an example of the stuff I had to build mildly amusing 8)
-- Build the View of the Maximiser data that shows what we want and store it
SELECT
CASE
WHEN c.Record_Type = 1 THEN c.Name
WHEN c.Record_Type = 31 THEN d.Name + ' - ' + c.First_Name + ' ' + c.Name
WHEN c.Record_Type = 2 AND len(c.Firm) > 0 THEN c.Firm
WHEN c.Record_Type = 2 AND len(c.Firm) < 1 THEN c.First_Name + ' ' + c.Name
WHEN c.Record_Type = 32 THEN
(
CASE
WHEN len(d.Firm) > 0 THEN d.Firm + ' - ' + c.First_Name + ' ' + c.Name
WHEN len(d.Firm) < 1 THEN d.First_Name + ' ' + d.Name + ' - ' + c.First_Name + ' ' + c.Name
END
)
ELSE c.Name
END AS Company,
CASE
WHEN c.Address_Id > 0 AND c.Record_Type IN (1, 31) THEN e.Address_Line_1
WHEN c.Address_Id < 1 AND c.Record_Type = 31 THEN g.Address_Line_1
WHEN c.Address_Id > 0 AND c.Record_Type IN (2, 32) THEN f.Address_Line_1
WHEN c.Address_Id < 1 AND c.Record_Type = 32 THEN g.Address_Line_1
ELSE c.Address_Line_1
END AS Address_1,
CASE
WHEN c.Address_Id > 0 AND c.Record_Type IN (1, 31) THEN e.Address_Line_2
WHEN c.Address_Id < 1 AND c.Record_Type = 31 THEN g.Address_Line_2
WHEN c.Address_Id > 0 AND c.Record_Type IN (2, 32) THEN f.Address_Line_2
WHEN c.Address_Id < 1 AND c.Record_Type = 32 THEN g.Address_Line_2
ELSE c.Address_Line_2
END AS Address_2,
CASE
WHEN c.Address_Id > 0 AND c.Record_Type IN (1, 31) THEN e.City
WHEN c.Address_Id < 1 AND c.Record_Type = 31 THEN g.City
WHEN c.Address_Id > 0 AND c.Record_Type IN (2, 32) THEN f.City
WHEN c.Address_Id < 1 AND c.Record_Type = 32 THEN g.City
ELSE c.City
END AS City,
CASE
WHEN c.Address_Id > 0 AND c.Record_Type IN (1, 31) THEN e.State_Province
WHEN c.Address_Id < 1 AND c.Record_Type = 31 THEN g.State_Province
WHEN c.Address_Id > 0 AND c.Record_Type IN (2, 32) THEN f.State_Province
WHEN c.Address_Id < 1 AND c.Record_Type = 32 THEN g.State_Province
ELSE c.State_Province
END AS State,
CASE
WHEN c.Address_Id > 0 AND c.Record_Type IN (1, 31) THEN e.Zip_Code
WHEN c.Address_Id < 1 AND c.Record_Type = 31 THEN g.Zip_Code
WHEN c.Address_Id > 0 AND c.Record_Type IN (2, 32) THEN f.Zip_Code
WHEN c.Address_Id < 1 AND c.Record_Type = 32 THEN g.Zip_Code
ELSE c.Zip_Code
END AS Zip,
CASE
WHEN c.Address_Id > 0 AND c.Record_Type IN (1, 31) THEN e.Country
WHEN c.Address_Id < 1 AND c.Record_Type = 31 THEN g.Country
WHEN c.Address_Id > 0 AND c.Record_Type IN (2, 32) THEN f.Country
WHEN c.Address_Id < 1 AND c.Record_Type = 32 THEN g.Country
ELSE c.Country
END AS Country,
CASE
WHEN n.Type = 0 THEN 'Manual Note'
WHEN n.Type = 1 THEN 'Mail - Out'
WHEN n.Type = 2 THEN 'Phone Call'
WHEN n.Type = 3 THEN 'Timed Note'
WHEN n.Type = 4 THEN 'Transfer'
WHEN n.Type = 5 THEN 'Task'
WHEN n.Type = 6 THEN 'Reserved'
WHEN n.Type = 7 THEN 'Reserved'
WHEN n.Type = 8 THEN 'Opportunity'
WHEN n.Type = 12 THEN 'Customer Service'
ELSE 'Unknown'
END AS Activity_Type,
n.DateCol, n.TextCol, n.Owner_Id, n.Client_Id, n.Contact_Number, n.Note_Type,
' ' AS sndex
INTO BookerNotes
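-- All the aliases below are self-joins on the one big client table:
-- c = the row being reported on, d/g = the parent record for the same
-- Client_Id (Contact_Number = 0), e/f = the row holding the address
-- selected by c.Address_Id, and n = the notes table driving the output.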
FROM
dbo.AMGR_Client_Tbl AS c
LEFT OUTER JOIN dbo.AMGR_Client_Tbl AS d ON c.Client_Id = d.Client_Id AND d.Contact_Number = 0
LEFT OUTER JOIN dbo.AMGR_Client_Tbl AS e ON c.Client_Id = e.Client_Id AND c.Address_Id = e.Contact_Number
LEFT OUTER JOIN dbo.AMGR_Client_Tbl AS f ON c.Client_Id = f.Client_Id AND c.Address_Id = f.Contact_Number
LEFT OUTER JOIN dbo.AMGR_Client_Tbl AS g ON c.Client_Id = g.Client_Id AND g.Contact_Number = 0
RIGHT OUTER JOIN dbo.AMGR_Notes_Tbl AS n ON c.Client_Id = n.Client_Id AND c.Contact_Number = n.Contact_Number
WHERE c.Record_Type IN (1, 2, 31, 32)
GO
|
|
|
|
|
Ooof, that would be nasty to maintain!
|
|
|
|
|
I just saw something similar the other day, with additional complications.
The developer who inherited it was trying to figure out how to make it a little more maintainable by removing some conditions that were nonsensical and others that were just bad hard-coded values.
|
|
|
|
|
Old posting, but Reddit had a key/value store, then moved to what sounds like a few tables which are thing/data pairs - so basically, instead of just one key/value table, it's many key/value tables.
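Roughly, that thing/data layout is one narrow table of fixed metadata per entity type plus a key/value table hanging off it. A hedged sketch (table and column names are made up for illustration, not Reddit's actual schema):
-- Fixed metadata every "thing" has
CREATE TABLE thing_link (
    thing_id   BIGINT PRIMARY KEY,
    ups        INTEGER,
    downs      INTEGER,
    created_at TIMESTAMP
);
-- Everything else lives as key/value rows
CREATE TABLE data_link (
    thing_id   BIGINT REFERENCES thing_link(thing_id),
    attr_key   VARCHAR(64),
    attr_value VARCHAR(4000),
    PRIMARY KEY (thing_id, attr_key)
);
-- Reading one attribute means a lookup rather than a column
SELECT t.thing_id, t.ups, d.attr_value AS title
FROM   thing_link t
JOIN   data_link  d ON d.thing_id = t.thing_id AND d.attr_key = 'title';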
|
|
|
|
|
yes, it has limited documentation ....
diligent hands rule....
|
|
|
|
|
I will double-check the number of columns in the table; we have very limited documentation...
diligent hands rule....
|
|
|
|
|
Southmountain wrote: it has hundreds of fields
Presumably you mean 'columns'...
Southmountain wrote: what is the best way to get understanding of this table quickly?
It is unlikely there is a way to do it quickly. The number of columns suggests it is probably overloaded, so there are multiple uses. The best you might be able to do quickly is determine how the data is created in the first place. And that would only be true if it is just a batch load.
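One quick first pass, if you can't find out how it's loaded: check which columns actually hold data. Since this is Teradata, the dictionary can even generate the probes for you; a sketch only, with placeholder database/table names, and no quoting of awkward column names:
-- Generate one "how many non-null values" probe per column
SELECT 'SELECT ''' || TRIM(ColumnName) || ''' AS col, COUNT(' ||
       TRIM(ColumnName) || ') AS populated FROM MyDatabase.MyBigTable;'
FROM   DBC.ColumnsV
WHERE  DatabaseName = 'MyDatabase'
AND    TableName    = 'MyBigTable'
ORDER BY ColumnId;
Run the generated statements and the columns that come back as zero can usually be ignored while you map out the rest.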
|
|
|
|
|
I found MS Access and Excel, with some SQL Server Management Studio, good enough for "data analysis".
Access and Excel can connect to SQL server. You can then tap into their analytics and query ability.
There's also MS Power BI (Desktop), to top it off.
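If the table is too big to drag into Excel or Power BI whole, pulling a random slice first keeps things responsive; Teradata's SAMPLE clause is handy for that (the table name is a placeholder):
-- Pull a random 10,000-row slice to explore in Excel / Power BI
SELECT *
FROM   MyDatabase.MyBigTable
SAMPLE 10000;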
"Before entering on an understanding, I have meditated for a long time, and have foreseen what might happen. It is not genius which reveals to me suddenly, secretly, what I have to say or to do in a circumstance unexpected by other people; it is reflection, it is meditation." - Napoleon I
|
|
|
|
|
Following your ideas, I will try to load it into an Excel pivot table and play around with it...
diligent hands rule....
|
|
|
|
|
Having done this sort of thing in the past (and yes, it was for the banking industry), you are going to need someone with domain knowledge; making an incorrect assumption about the relevance or relationships of a column can lead you down some nasty cul-de-sacs.
Never underestimate the power of human stupidity -
RAH
I'm old. I know stuff - JSOP
|
|
|
|
|
Delete a column, see who complains and then get them to explain what it's for.
// TODO: Insert something here Top ten reasons why I'm lazy
1.
|
|
|
|
|