The title of this message might look like a joke, but it is actually very serious.
The other day I wanted to implement my own BigInteger class, and I was using bytes as the backing fields. Yet, I would only use values from 0 to 9 for each digit. Bing AI suggested that I create an enum with values ranging from D0 to D9 (I think their actual values are obvious).
Yet, using an enum like that doesn't forbid users from doing things like (DecimalDigit)56, passing 56 to an enum that was only supposed to support values ranging from 0 to 9.
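To illustrate the problem, here is a small snippet of mine (not from the original post; the name DecimalDigitEnum is made up) showing that an out-of-range cast compiles and runs without any complaint:

```csharp
using System;

// Hypothetical enum mirroring the suggested D0..D9 values.
public enum DecimalDigitEnum : byte { D0, D1, D2, D3, D4, D5, D6, D7, D8, D9 }

public static class CastDemo
{
    public static void Main()
    {
        // Compiles and runs fine, even though 56 is not a declared member.
        var bogus = (DecimalDigitEnum)56;
        Console.WriteLine(bogus);  // prints "56" (no matching name, so the number is printed)
        Console.WriteLine(Enum.IsDefined(typeof(DecimalDigitEnum), bogus));  // prints "False"
    }
}
```

The compiler only checks that the value fits in the underlying type (byte here); it never checks membership in the declared values.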
Of course I can validate the values at run-time... but the entire purpose of using an enum was to avoid mistakes like that.
So, my solution was to create a class (in fact, a struct, but a class serves the same purpose) that has a private constructor and public static readonly fields ranging from D0 to D9. This way, users outside of the class cannot pass values that aren't in the 0-9 range, unless they really want to mess things up (like using unsafe reflection).
This also reminded me of a job where we had one enum with like 20 values... and then, many, many, many switches to get the many different traits of those enums.
Wouldn't it be better to just have classes, with all the traits, and use the classes?
Aside from the use of the enum in switch statements, they work the same in most cases, and work even more easily in cases where we usually had to use helper methods... and if a new trait is added, we have a single place (where the enum values are declared) to fix, with no chance of "forgetting" a case in a switch somewhere else.
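As a sketch of the difference (the Fruit example and its traits are invented for illustration, not from that job's actual code):

```csharp
using System;

// Enum approach: every trait lives in a separate switch somewhere.
public enum Fruit { Apple, Banana, Cherry }

public static class FruitTraits
{
    public static string Color(this Fruit f) => f switch
    {
        Fruit.Apple  => "red",
        Fruit.Banana => "yellow",
        Fruit.Cherry => "dark red",
        _ => throw new ArgumentOutOfRangeException(nameof(f))
    };
}

// Class approach: all traits are declared once, next to each "value".
public sealed class FruitValue
{
    public static readonly FruitValue Apple  = new("Apple",  "red");
    public static readonly FruitValue Banana = new("Banana", "yellow");
    public static readonly FruitValue Cherry = new("Cherry", "dark red");

    public string Name  { get; }
    public string Color { get; }

    private FruitValue(string name, string color) => (Name, Color) = (name, color);
}
```

With the class approach, adding a new value without supplying its traits is a compile-time error (the constructor demands them), while a forgotten switch case is only caught at run time, if at all.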
What do you guys think?
public struct DecimalDigit
{
    public static readonly DecimalDigit D0 = new(0);
    public static readonly DecimalDigit D1 = new(1);
    public static readonly DecimalDigit D2 = new(2);
    public static readonly DecimalDigit D3 = new(3);
    public static readonly DecimalDigit D4 = new(4);
    public static readonly DecimalDigit D5 = new(5);
    public static readonly DecimalDigit D6 = new(6);
    public static readonly DecimalDigit D7 = new(7);
    public static readonly DecimalDigit D8 = new(8);
    public static readonly DecimalDigit D9 = new(9);

    private DecimalDigit(byte value)
    {
        _value = value;
    }

    private readonly byte _value;

    public byte ByteValue
    {
        get => _value;
    }
}
public enum DecimalDigit : byte
{
    D0, D1, D2, D3, D4, D5, D6, D7, D8, D9
}
Notice that although the enum version is smaller, if we need to add names for the values, in the class we just add a property, while for the real enum we have to create a helper method. If we need to convert them to numbers, add an emoji, or whatever, in the first version it is just a matter of adapting the class, while in the second it is a matter of creating more (and somewhat unrelated) helper methods.
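For example, adding an English name to the struct version is just one more constructor parameter and property (the EnglishName identifier and the name strings are my own invention, a variant of the struct above):

```csharp
// Variant of the DecimalDigit struct with a per-value name trait.
public readonly struct DecimalDigit
{
    public static readonly DecimalDigit D0 = new(0, "zero");
    public static readonly DecimalDigit D1 = new(1, "one");
    public static readonly DecimalDigit D2 = new(2, "two");
    // ... D3 through D9 follow the same pattern.

    public byte ByteValue { get; }
    public string EnglishName { get; }

    private DecimalDigit(byte value, string name) =>
        (ByteValue, EnglishName) = (value, name);
}
```

With a real enum, the same addition would mean writing a new switch-based helper method somewhere else, with one arm per value.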
Edit: I had some questions about why create a new decimal class. There is no real need to create one. I just wanted to do it as an exercise. I can tell that the BigInteger implemented in .NET is way faster than my class. Yet, just by writing the UnsignedDecimalInteger I saw opportunities to write Quadbits (effectively a single hexadecimal digit... or just 4 bits), so in one byte I can store 2 Quadbits. I also saw opportunities for caching the internal buffers I use... and I am just "relearning" how to do math the "old way" using decimal values. I will, at some point, improve it to use 32 or 64 bits at once.
Also, one of the next steps, be it with BigInteger or my UnsignedDecimalInteger, is to create a BigDecimal or similar class. In fact, to have a value alone (without caring about operations), I just need a value telling where the dot separating the integer part from the fractional part goes. Or, I can literally have two BigIntegers (or similar), one for the left side and one for the right side of the decimal point.
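A minimal sketch of the "unscaled BigInteger plus a dot position" idea (the type and member names are my assumptions, only formatting is shown, no arithmetic, and Scale is assumed to be non-negative):

```csharp
using System.Numerics;

// The value represented is Unscaled * 10^(-Scale),
// e.g. Unscaled = 12345, Scale = 2 represents 123.45.
public readonly struct BigDecimalSketch
{
    public BigInteger Unscaled { get; }
    public int Scale { get; }  // number of digits to the right of the dot

    public BigDecimalSketch(BigInteger unscaled, int scale) =>
        (Unscaled, Scale) = (unscaled, scale);

    public override string ToString()
    {
        // Pad so there is at least one digit left of the dot (e.g. "0.005").
        var digits = BigInteger.Abs(Unscaled).ToString().PadLeft(Scale + 1, '0');
        var sign = Unscaled.Sign < 0 ? "-" : "";
        return Scale == 0
            ? sign + digits
            : sign + digits[..^Scale] + "." + digits[^Scale..];
    }
}
```

The two-BigInteger variant (one per side of the dot) works the same way for storage, though carrying between the two halves makes arithmetic more involved.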
modified 1-Aug-23 18:09pm.