This is a draft to explore ideas for the novel “The Shape of Water”. It will not be included in the word count for NaNoWriMo.
May 23, 2021.
There really isn’t anything that can’t be predicted.
That’s the premise of my algorithms, but to hear people’s reactions you would think I had said I was working on cold fusion. Bloody hell to them is all I want to say. Why is it so hard to understand that history is simply data? That data has trends? That trends tend to repeat, and those repetitions can be predicted reliably?
Very reliably, if you work at your equations as hard as I do.
I would understand if I was trying to sell them yet one more glorified stock market analysis tool, or some bullshit lottery number generator. I could even take it if they asked me to generate a probability on their getting lucky at the strip club tonight … which I could, given enough data … but when faced with my research I cannot abide the looks of incredulity presented to me by people I know should understand. They are acting like I have a man inside the desk typing out the correct answers, like the automata from two centuries ago, and they simply haven’t found the secret hatch yet. Why is it so hard for them to comprehend that data is all we need to understand our world, and that with enough of it we can postulate probabilities and outcomes with an almost eerie accuracy?
If they ask me when the next stock market crash is going to come, I am going to tell them the day the market will fold, and the names of the banks it takes with it. I’ll even give them the initials of the 100 people most responsible for it happening. Nobody likes a smart-ass, right? Especially when he can predict the day you’re going to die.
Good news. The first 17th century lead came through for me brilliantly! The sun was unreal, and I am not used to being outside so much … everyone had to comment on my “healthy glow” when I got back … but after only about three hours of searching at the indicated site I found it. The quantity was almost exactly correct as well, so even with the big wheeled cart I needed four trips to get it all to the transport I rented. It was worth it. Once I have it all processed I will have about 6630 ounces of nearly pure gold that I can sell anywhere, no historical investigations required. If the next 17th century lead is as good as this one I will be able to say goodbye to fundraising for the next five years at least. This one find alone will probably get me almost 17 million, assuming gold is still about $2540 an ounce. The quantities estimated for the second lead are even larger, so here’s hoping they didn’t do too much exaggerating.
I found a gun at the site. Really old, obviously, but the parts that were still intact were pretty clearly parts of a rather large pistol of some sort. I ran the image algorithms over it and they stated with 100% certainty that it was an “English Civil War Dog Lock Cavalry Pistol”. It fits with the story of this find. Ex-military robbing wealthy travellers would probably have kept their old service pistols, and these were apparently state of the art for that time. I wonder why it was left with the gold? I can only guess that as these men understood they must work as a team to succeed as a bandit horde, they also understood that any one of them could turn against the others. If each of them kept a personal stash of treasure for future use, in this case gold bullion, it would follow that one might try to take another’s by force. Hence the pistol.
It seems foolish to me that someone hid this massive amount of gold bullion coins, probably invested years of effort in their theft, but never returned to claim them. A secret, worth a fortune then and now, that was of no value to the bearer. I sure as hell won’t make the same mistake.
I am almost done getting the 18th century data formatted and ready. With any luck I will be able to put it through sometime early next week. A lot more data than the 17th century, but I suspect a lot more leads as well. I am not sure if I should do the work I had planned on the error correction subroutines before I run the new data package or after. They seemed to work well for this last run with the 17th century data, but I have this weird hunch they aren’t going to be as good with the volumes of data the following centuries have. It’s looking like the 18th century is going to be about one or two hundred exabytes, not terabytes, so this will be a good test if I decide to wait on the algorithmic upgrade.
Marj is coming over tonight. I wonder if I should tell her about the find?