I would join the camp arguing that AI can already do those things, or is closing in on them.
Its contextual awareness does suffer at the moment, though.
In ChatGPT's case this comes down to the size of its context window: it can only hold roughly 8,000 tokens (a few thousand words) of current context.
It has a lot of knowledge to share and real capability to transform whatever you give it, but the context it can work with at any one time is capped at that window.
But OpenAI is now testing a 32,000-token context, and its competitor Anthropic has already released a Claude model with a 100,000-token context.
The issue is no longer that the model cannot take things into account; the question now is what an efficient UX looks like for feeding the AI that much context quickly. You are not going to type out your situation in 100,000 words in a chat.
The context needs to be provided in the background: the model should know who it is speaking to, when, and where, and have access to contextual information the user is not adding themselves.
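As a rough sketch of that "background context" idea: many chat-style APIs accept a list of role/content messages, so the who/when/where information can be prepended as a system message the user never types. The function and field names below are hypothetical, not any specific product's API.

```python
# Sketch: inject background context (user identity, environment docs)
# into a chat prompt without the user typing it. Names are illustrative.

def build_messages(user_question, user_profile, relevant_docs):
    """Prepend who/where context plus retrieved background documents."""
    background = (
        f"User: {user_profile['name']}, role: {user_profile['role']}. "
        "Relevant background: " + " ".join(relevant_docs)
    )
    return [
        {"role": "system", "content": background},  # invisible to the user
        {"role": "user", "content": user_question},
    ]

msgs = build_messages(
    "Why is the build failing?",
    {"name": "Alice", "role": "backend engineer"},
    ["Project uses Python 3.11.", "CI runs on Linux."],
)
```

With a 100,000-token window, that `relevant_docs` list can carry far more background than a user could reasonably paste in by hand.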
The second thing is critical thinking. For cost and performance reasons, the current ChatGPT is not really thinking: it has no internal dialog.
But there are already experiments that run several such models for multiple iterations, debating each other before settling on a conclusion.
Such approaches have been shown to raise the quality of the output significantly.
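The debate setup can be sketched as a simple loop: each round, every model sees the other's latest answer and revises its own. The stub functions below stand in for real LLM API calls; this is a toy illustration of the structure, not any published experiment's actual code.

```python
# Toy sketch of models "debating": two stub agents exchange answers
# over several rounds, each revising after seeing the other's reply.
# In a real experiment, model_a/model_b would be calls to an LLM API.

def model_a(question, peer_answer):
    # Placeholder for an LLM call that includes the peer's answer
    # in its prompt.
    return f"A on '{question}', having seen: {peer_answer!r}"

def model_b(question, peer_answer):
    return f"B on '{question}', having seen: {peer_answer!r}"

def debate(question, rounds=3):
    a_answer, b_answer = "", ""
    transcript = []
    for _ in range(rounds):
        a_answer = model_a(question, b_answer)
        b_answer = model_b(question, a_answer)
        transcript.append((a_answer, b_answer))
    return transcript  # final pair is the debated conclusion

log = debate("Is 17 prime?", rounds=2)
```

The cost point is visible right in the loop: `rounds` models-worth of inference per question is exactly the "expensively used" part.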
So, where I agree is that ChatGPT, as it stands, does fail at those three things.
But used cleverly (and expensively), it can almost fix them already.
Add to that the exponential pace of change on multiple fronts at once, which combined is even faster than exponential on any single one. In a year, all of these shortcomings of ChatGPT could be gone.
There is progress on algorithms, on hardware, and on the ways we actually use these models.
So, to be honest, I do not find these arguments strong enough to still hold in a year.