I don't understand the point of that share. There are likely thousands of implementations of selection sort on the internet and so being able to recreate one isn't impressive in the slightest.
And all the models are alike in not being able to discern what is real from what they have just made up.
No? I mean, if they refused, that would actually be a reasonably good outcome. The real problem is if they can generally write selection sorts but occasionally go haywire due to additional context and start hallucinating.
Because, to be blunt, I think this is total bullshit if you're using a decent model:
"I've even caught it literally writing the wrong algorithm when asked to implement a specific and well known algorithm. For example, asking it "write a selection sort" and watching it write a bubble sort instead. No amount of re-prompts pushes it to the right algorithm in those cases either, instead it'll regenerate the same wrong algorithm over and over."
FWIW, here's 4o writing a selection sort: https://chatgpt.com/share/67e60f66-aacc-800c-9e1d-303982f54d...
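For anyone skimming, the tell is easy to spot by eye. Here's a minimal sketch of both algorithms in plain Python (my own illustration, not taken from the linked chat): selection sort scans the unsorted suffix for the minimum and does at most one swap per pass, while bubble sort keeps swapping adjacent out-of-order pairs.

    def selection_sort(a: list) -> list:
        """Each pass finds the minimum of the unsorted suffix and swaps it into place."""
        a = list(a)
        n = len(a)
        for i in range(n - 1):
            min_idx = i
            for j in range(i + 1, n):
                if a[j] < a[min_idx]:
                    min_idx = j
            a[i], a[min_idx] = a[min_idx], a[i]  # at most one swap per pass
        return a

    def bubble_sort(a: list) -> list:
        """Each pass repeatedly swaps adjacent out-of-order pairs."""
        a = list(a)
        n = len(a)
        for i in range(n - 1):
            for j in range(n - 1 - i):
                if a[j] > a[j + 1]:
                    a[j], a[j + 1] = a[j + 1], a[j]  # many swaps per pass
        return a

    if __name__ == "__main__":
        data = [5, 2, 9, 1, 5, 6]
        assert selection_sort(data) == sorted(data)
        assert bubble_sort(data) == sorted(data)

Both produce sorted output, but the inner-loop structure is completely different, which is why a model writing one when asked for the other is hard to miss.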