How are we regulating ChatGPT and other AI tools?

ChatGPT is only two months old, but we've spent the time since it debuted debating how powerful it really is — and how we should regulate it. 

Just because it can be helpful doesn't mean it can't also be harmful: Students can use it to write essays for them, and bad actors can use it to create malware. Even without malicious intent from users, it can generate misleading information, reflect biases, produce offensive content, store sensitive information, and, some people fear, degrade everyone's critical thinking skills through over-reliance. Then there's the ever-present (if a bit unfounded) fear that RoBoTs ArE tAkInG oVeR.

And ChatGPT can do all of that without much — if any — oversight from the U.S. government.