

Current LLMs are just that: large language models. They’re incredible at predicting the next word, but they can’t reliably do anything beyond that, like fact-checking or playing chess. The theoretical AI that “could take over the world” like Skynet is called “Artificial General Intelligence” (AGI). We’re nowhere close yet; do not believe OpenAI when they claim otherwise. This means the biggest risk right now is a human deciding to put an LLM “in charge” of an important task, where a mistake could cost lives.
None of that’s true. Free speech laws are meant to prevent the government from arresting you for opinions or criticism; they don’t bind private actors. Social media platforms, employers, parents, etc. can still take action against your statements, with or without a stated reason. The government can also sidestep the protection by prosecuting on a pretext: if someone is critical of the government, they’ve likely broken some law they don’t agree with, and they can be charged under that instead.