We are now operating well beyond our current understanding of these algorithms. Sure, we know it's greedy gradient descent, etc. But I am always surprised that it turns out to be the very littlest thing (like the asymmetric ReLU activation) that delivers transformative results. Had someone told me at the time that was a breakthrough, I would have laughed. But it was. In the same way, we really don't know whether the way the training feedback is handled between these distinct models (or something else) will turn out to be a different breakthrough. Even the folks at OpenAI don't know! It will take years to understand these new algorithms and turn them into science.
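To make the "littlest thing" concrete, here is a minimal sketch (my own illustration, nothing from OpenAI) of the asymmetry I mean: ReLU simply zeroes out negative inputs while passing positive ones through, unlike a symmetric activation such as tanh. Trivial on paper, transformative in practice.

```python
import numpy as np

# ReLU: asymmetric around zero -- negatives clamped to 0, positives unchanged.
def relu(x):
    return np.maximum(0.0, x)

x = np.array([-2.0, -0.5, 0.0, 0.5, 2.0])
print(relu(x))      # [0.   0.   0.   0.5  2. ]
print(np.tanh(x))   # symmetric by contrast: [-0.96 -0.46  0.    0.46  0.96]
```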
In the meantime, I think we should measure breakthrough status based on the capabilities of the systems. And GPT-4 is a freaking BREAKTHROUGH! Maybe it will turn out to be based purely on "more of the same," but I doubt it. I bet certain aspects of what they have done will turn out to be critical. And over time this will become clear to OpenAI... and later it will be clear to the rest of us too.