Yeah, cuneiform was interesting in terms of the medium and how much of it survived, and how broadly. Their folk tale about how they received writing was that someone arrived from the ocean and, trying to communicate, pressed reeds into the wet mud.
I sometimes wonder if there was an earlier Aegean Bronze Age/prehistoric writing system (like the one found on the Dispilio tablet) that has been lost to the ages because it was on a perishable medium, and whether the Sumerians ended up with a version of writing that persisted, loosely echoing their folk history.
The irony being that the surviving records of antiquity are predominantly royal propaganda, because those were carved into stone, which lasted, while other writing media didn’t survive.
The guy carving into the rock here was, in reality, doing so at the bidding of a ruler who would have killed him if he didn’t record the version of reality that ruler wanted.
The idea that what was written down could be instantly disputed and checked against facts at all is the part this dude would find unbelievable.
I mean, isn’t this kind of in keeping with the theme of US civil wars so far?
If I were creating a civil war bingo card based on the history of civil wars in the US, “starts over how people with darker skin can be abused or not” would certainly have been on it.
Well, for the sake of pointing things out, GPT-4 can actually answer the prompt correctly, because it arrives at the answer from the opposite direction: it can tell whether the integer is even or odd, and it knows that even and odd integers in binary end in 0 and 1 respectively.
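To make that reasoning concrete, here’s a minimal sketch in Python (the integer is a hypothetical stand-in, since the original prompt isn’t quoted here):

```python
# Parity is visible from the last decimal digit alone, and parity
# fully determines the final binary digit: even -> 0, odd -> 1.
def last_binary_digit(n: int) -> str:
    return "0" if n % 2 == 0 else "1"

n = 3_812_047  # hypothetical example integer
assert bin(n)[-1] == last_binary_digit(n)
print(last_binary_digit(n))  # "1", since 3,812,047 is odd
```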
I’ve been looking for the last few years into a tradition that died out nearly 1,500 years ago, and it has me wondering the opposite.
How, in the present day, given the clear trajectory of the science and technology we’re currently working on, do we not realize that this ancient and relatively well-known text isn’t some mystical mumbo jumbo but is straight up dishing on the nature of our reality?
I think there’s a stubbornness of thought among most humans regarding what they think they know about life, and it blinds the religious and non-religious alike.
It probably won’t happen until we move to new hardware architectures.
I do think LLMs are a great springboard for AGI, but I don’t think the current hardware allows for models to cross the hump to AGI.
There’s not enough recursive self-interaction in the network to encode nonlinear representations. This past year we saw a number of impressive papers exhibiting linear representations of world models extrapolated from training data, but no nonlinear representations have been discovered yet, and I don’t think they will be.
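For what it’s worth, here’s a rough sketch of what a “linear representation” probe looks like in those papers, using synthetic stand-ins for the model activations and world-state labels (all names and data here are illustrative, not from any specific paper):

```python
import numpy as np

rng = np.random.default_rng(0)
acts = rng.normal(size=(1000, 64))          # stand-in for hidden activations
w_true = rng.normal(size=64)                # direction encoding a world-state bit
labels = (acts @ w_true > 0).astype(float)  # stand-in for world-state labels

# Fit a linear probe by least squares onto +/-1 targets; high accuracy
# means the feature is linearly decodable from the activations.
w, *_ = np.linalg.lstsq(acts, labels * 2 - 1, rcond=None)
preds = (acts @ w > 0).astype(float)
print("probe accuracy:", (preds == labels).mean())
```

A nonlinear representation would be one where no such single direction exists, so the probe would have to be something richer than a linear map.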
But when we switch to either optoelectronics or colocating processing with memory on a per-node basis, the next generation of algorithms taking advantage of that hardware may allow for the final missing piece of modern LLMs: extrapolating beyond the training set and pulling nonlinear representations of world models from the data (things like saying “I don’t know” will be more prominent in that next generation of models).
From there, we’ll quickly get to AGI, but until then I’m skeptical that classical/traditional hardware will get us there.