I don't think they're confused; I think they're approaching it as general AI research, given the uncertainty about how the models might improve in the future.
They even call this out a couple of times in the intro:
> This feature was developed primarily as part of our exploratory work on potential AI welfare
> We remain highly uncertain about the potential moral status of Claude and other LLMs, now or in the future
The biggest enemy of AI safety may end up being deeply confused AI safety researchers...