The closest thing I've seen to sentience over the years … which, let's face it, is what people really mean when they talk about (General) AI, rather than 'intelligence' … was Lenat's Eurisko.
Because, rather than being designed in from the start, the 'entity' that kept signing its name to its work spontaneously emerged from its logic substrate without being encouraged in any way.
But I'd hesitate to label it sentience as such … more an interesting side-effect that merited exploration, rather than the brain-surgery that was applied to it.
But, no, like you I regard today’s efforts as simply sophisticated simulacra — ‘intelligent’, perhaps, but not sentient. I see no evidence of the Chinese Room being refuted.