There is a real existential risk from AI, but articles like this one do us a disservice, because they amount to crying wolf.
Present chat systems might pass many tests showing they beat humans, but they don't demonstrate the sustained focus on a single job that would be required for AI to operate without human interaction, which is what a dangerous AGI scenario needs to develop.
Any long-running task, like developing a new drug, chip, or application, would require autonomy that simply does not exist in these AI models (yet).
So Tomas' warning here amounts to crying wolf: by the time the models do become effective in an autonomous mode, we will roll our eyes, because we have heard the warning so many times before.
We need to do better at understanding the risk and reporting on it.