
Law secretly drafted by ChatGPT makes it onto the books

'Unfortunately or fortunately, this is going to be a trend'


The council of Porto Alegre, a city in southern Brazil, has approved legislation drafted by ChatGPT. 

The ordinance is supposed to stop the city from charging taxpayers to replace stolen water meters. The council's 36 members passed the proposal unanimously, and it came into effect in late November. 

But what most of them didn't know was that the text of the proposal had been generated by an AI chatbot — at least not until councilman Ramiro Rosário admitted he had used ChatGPT to write it.

"If I had revealed it before, the proposal certainly wouldn't even have been taken to a vote," he told the Associated Press.

This is the first piece of AI-written legislation to be passed by lawmakers that we vultures know about; if you know of any other robo-written laws, contracts, or interesting stuff like that, do let us know. To be clear, ChatGPT was not asked to come up with the idea, but was used as a tool to write up the fine print. Rosário said he used a 49-word prompt to instruct OpenAI's erratic chatbot to generate the complete draft of the proposal. 

At first, the city's council president Hamilton Sossmeier disapproved of his colleague's methods and thought Rosário had set a "dangerous precedent." He later changed his mind, however, and said: "I started to read more in depth and saw that, unfortunately or fortunately, this is going to be a trend."

Sossmeier may be right. In the US, Massachusetts state Senator Barry Finegold and Representative Josh Cutler made headlines earlier this year for their bill titled: "An Act drafted with the help of ChatGPT to regulate generative artificial intelligence models like ChatGPT."

The pair believe machine-learning engineers should include digital watermarks in any text generated by large language models to detect plagiarism (and presumably allow folks to know when stuff is computer-made); obtain explicit consent from people before collecting or using their data for training neural networks; and conduct regular risk assessments of their technology.

Using large language models like ChatGPT to write legal documents is controversial and risky right now, especially since the systems tend to fabricate information and hallucinate. In June, attorneys Steven Schwartz and Peter LoDuca representing Levidow, Levidow & Oberman, a law firm based in New York, came under fire for citing fake legal cases made up by ChatGPT in a lawsuit.

They were suing Colombian airline Avianca on behalf of a passenger who was injured aboard a 2019 flight, and had prompted ChatGPT to recall similar cases to cite, which it did — except it also just straight up imagined some. At the time, Schwartz and LoDuca blamed their mistake on not understanding the chatbot's limitations, and claimed they didn't know it could hallucinate information.

Judge Kevin Castel of the US District Court for the Southern District of New York realized the cases were bogus when lawyers for the opposing side failed to find the cited court documents, and he ordered Schwartz and LoDuca to produce their sources. Castel fined the pair $5,000 and dismissed the lawsuit altogether. 

"The lesson here is that you can't delegate to a machine the things for which a lawyer is responsible," Stephen Wu, shareholder in Silicon Valley Law Group and chair of the American Bar Association's Artificial Intelligence and Robotics National Institute, previously told The Register.

Rosário, however, believes the technology can be used effectively. "I am convinced that ... humanity will experience a new technological revolution. All the tools we have developed as a civilization can be used for evil and good. That's why we have to show how it can be used for good," he said. ®

PS: Amazon announced its Q chat bot at re:Invent this week, a digital assistant for editing code, using AWS resources, and more. It's available in preview, and as it's an LLM system, we imagined it would make stuff up and get things wrong. And we were right: internal documents leaked to Platformer describe the neural network "experiencing severe hallucinations and leaking confidential data."
