The rise of generative AI models like ChatGPT and Bing Chat raises unprecedented legal challenges, as they can produce accounts of news and events that are false, falsely attributed, and potentially libelous.
AI models like ChatGPT and Bing Chat can now answer questions confidently and convincingly, even when the answers are hallucinated or attributed to non-existent articles. By design, these models do not know or care whether a statement is true, only whether it looks true.
This raises serious concerns when AI models accuse people of crimes they did not commit or make false statements that are detrimental to their reputation. The legal drama has already begun, as Australian mayor Brian Hood was named by ChatGPT as having been convicted in a bribery scandal from 20 years ago, which was false and damaging to his reputation.
When an AI model defames someone, who made the statement? Is it OpenAI, which developed the software? Is it Microsoft, which licensed it and deployed it in Bing? Or is it the software itself, acting as an automated system?
The legal implications are complex and nuanced, as the laws and precedents defining defamation were established before this technology existed. As AI models become more integrated with mainstream products, replacing search engines and serving as sources of truth, they can no longer be considered toys but tools employed regularly by millions of people.
This is the beginning of a legal drama that will be interesting to watch as tech and legal experts attempt to tackle the fastest-moving target in the industry.
With a mixture of literature, cinema, and photography, Manish is mostly traveling. When he is not, he is probably writing another tech news story for you!