The Lounge is rated Safe For Work. If you're about to post something inappropriate for a shared office environment, then don't post it. No ads, no abuse, and no programming questions. Trolling (political, climate, religious, or whatever) will result in your account being removed.
Using a "moving window", you have the following cases:
(Last throw was NOT 20): occurs 19/20 times, chance of 1/8000 of throwing three 20s in the next three moves
(Last throw was a 20, next to last throw was something else): Occurs 19/400 times, chance of 1/400 of throwing two 20s in the next two moves
(last two throws were a 20, previous throw was something else): occurs 19/8000 times, chance of 1/20 of throwing a 20 in the next move
(last three throws were a 20): occurs 1/8000 times.
Exactly one of the 4 cases applies at any time, so the probabilities simply add.
P = 19/20 * 1/8000 + 19/400 * 1/400 + 19/8000 * 1/20 + 1/8000 * 1 = 19/160000 + 19/160000 + 19/160000 + 20/160000 = 77/160000 ≈ 1/2078, slightly under 1/2000.
The difference from the triples result is because each 20 has a chance to participate multiple times. In this case, N(50%) would be 1440.
Freedom is the freedom to say that two plus two make four. If that is granted, all else follows.
-- 6079 Smith W.
I got into the special math class exactly over such a debate (with normal dice). I stated that there are only two outcomes - you either throw a triple or you don't. So it is 50-50...
Three different teachers spent four years trying to explain to me why I was wrong...
I was right...
"The only place where Success comes before Work is in the dictionary." Vidal Sassoon, 1928 - 2012
These are my answers and I definitely could be wrong.
Andreas Mertens wrote:
Suppose you start rolling as described. If the first or second roll isn't a 20, do you finish through to the third roll?
1) There is no need to roll the die two more times, because each die roll is independent of the others. However, you would simply increment your count of tries and failures that have occurred.
Andreas Mertens wrote:
What if instead of one 20-sided dice, you have 3 of these dice and roll all three at once? Would that change the odds?
2) Effectively no, because each 20-sided die roll is independent of all the others, whether you roll them at the same time or individually. However, maybe there are some physics involved that affect the way the dice bounce against each other?? That isn't really counted in probability math, though. Instead it is the pure math of just the theoretical values (no physics involved).
Independent meaning that if you roll a 20-sided die twice, the second roll is in no way affected by the first roll. ("Mutually exclusive" is actually a different thing: two outcomes that can't both happen.) A dependent event might be like selecting one of three doors for a prize: once you select a door, it is removed from the choices, so the next choice is only 1 out of 2.
Won't you need an initial estimate for Bayes, and then improve that estimate on each result?
You can already guess the 1:20 for the initial roll, so you don't need to measure it; then look to the next roll to get two in a row (19 chances of failure), then a final 50:50 hope for the last '20' with a 1:10 chance.
That needs a few more unknowns determined by measurement and Bayes updates. Maybe the die is weighted? If it were, how much would you pay per throw to gain confidence about the weighting?
For my brother the accountant this Christmas, I got a big bag of receipts. I told him it was OK if he didn't like them, I'd kept all the presents…
"I have no idea what I did, but I'm taking full credit for it." - ThisOldTony
"Common sense is so rare these days, it should be classified as a super power" - Random T-shirt
AntiTwitter: @DalekDave is now a follower!
This isn't even really enough for a tip but if you ever can't implement a real hashtable for reasons there's a pretty decent fake you can do for strings:
Store your zero-terminated strings contiguously, each prepended by a size_t length (which includes the trailing zero char): [len][string][len][string][len][string], like that.
when you go to check for a match, start at the beginning.
get the size_t len from the current position
see if it matches your passed in string's length (don't forget to account for the trailing zero). if it doesn't, increment the current table pointer by len and continue
if it does, then in a loop check each character to see if they match. once you find one that doesn't match, you can increment the current table pointer by the remaining characters (computed from len and your current position)
this way you early-out on non-matches, and you've got a remedial hash on top of that (the length)
all of that and you don't require an actual hashtable, and it's not bad in terms of space/time from my tests (currently on JSON fields in a document)
Hmmm... on a big-endian system -- if you limit each string's length to ensure that the high byte of the next string's length is always zero, you can avoid adding "trailing nulls" -- just terminate the array with a zero. Profit.
I once implemented a hash table almost exactly that way. The only difference was I put the hash value first, then the length. I did this because I had a lot of long symbols with very similar names, plus it all sat in shared memory. This ended up being considerably faster than not having the hash values. In my case, I was parsing a script language I wrote, and this was for a table of imported DLL libraries and functions.
"They have a consciousness, they have a life, they have a soul! Damn you! Let the rabbits wear glasses! Save our brothers! Can I get an amen?"
Isn't that how any language implemented variable length strings before C arrived and "invented" NUL termination? (In conflict with long established international standards... If you really need a string terminator character, there is one in the ASCII / ISO646 character set; it is not NUL!)
For something like forty years, the common way of serializing mixed type data has been to take it one step further, adding a (binary) tag prefix, for a TLV (tag - length - value) format. Skipping through the fields of an unordered record is simple and quick.
One disadvantage is that if you run on a 64 bit CPU, and insist on 64 bit tag and length, then you have a 16 byte overhead per value. There are standards for packing both tag and length, not unlike UTF-8 encoding, so that a "small" tag and short length take up only one byte each - at the expense of more complex code when the small tag or length range is exceeded.
And, talking about NUL terminators: one standard way to terminate a list of a variable number of fields (including a variable-length array) is with a sentinel element of tag = 0, length = 0. If you want super-efficient code and use a packed format, one byte for each, you can test both as a single short to see if it is zero, in a single instruction.