Police officers are using AI chatbots to write crime reports

PLUS: Amazon Partners with Anthropic to Enhance Alexa

This week, we delve into how AI is being used by police for efficiency and the controversy surrounding it.

We also explore Amazon’s partnership with Anthropic and why it chose Claude to power Alexa.

Let’s dive in!

🥽 TRENDS

Amazon Partners with Anthropic to Enhance Alexa (🔗 link)

Amazon is set to release a revamped version of Alexa in October, primarily powered by Anthropic's Claude AI models rather than its own technology. This new "Remarkable" Alexa will be offered as a paid service, costing between $5 and $10 monthly, while the current "Classic" version will remain free. The decision to use Claude came after Amazon's in-house AI struggled with response times and coherence. This move marks a significant shift for Amazon, which typically prefers to use its own technology, and highlights the increasing importance of partnerships in the competitive AI landscape. The upgraded Alexa aims to offer more complex interactions, including shopping advice, news aggregation, and advanced home automation features.

California passes controversial AI safety bill (🔗 link)

The California State Assembly and Senate have passed the Safe and Secure Innovation for Frontier Artificial Intelligence Models Act (SB 1047), marking one of the first significant AI regulations in the US. The bill requires AI companies in California to implement safety measures before training advanced foundation models, including mechanisms for model shutdown, protection against unsafe modifications, and risk assessment procedures. Despite initial opposition from major AI companies and some politicians, amendments were made to address concerns, such as replacing criminal penalties with civil ones and adjusting enforcement powers. The bill now awaits Governor Gavin Newsom's decision, with Anthropic expressing cautious support for the improved version, while OpenAI's stance remains unclear. If enacted, SB 1047 could set a precedent for AI regulation beyond California.

Nvidia Draws Antitrust Scrutiny as Enforcers Signal Early Interest in AI (🔗 link)

Nvidia, the world's largest maker of AI chips, experienced a significant market setback last Tuesday, losing $279 billion in market capitalization as its stock fell by nearly 10% amidst a global selloff. The company's CEO, Jensen Huang, saw his net worth decrease by $10 billion. This financial blow comes as Nvidia faces increasing regulatory scrutiny, with the US Justice Department reportedly investigating potential antitrust violations in the AI chip market, where Nvidia controls about 80% share. The company is alleged to have pressured cloud providers and engaged in anticompetitive practices, drawing attention from US lawmakers and regulators both in the United States and Europe. This situation highlights the growing tension between Nvidia's market dominance in the booming AI industry and increasing concerns about fair competition and antitrust issues.

💡 SPOTLIGHT

Police officers use AI chatbots to write crime reports in OKC (🔗 link)

Police departments across the United States are experimenting with a new artificial intelligence tool that promises to revolutionize one of law enforcement's most time-consuming tasks: writing incident reports. This technology, developed by Axon, the company behind Tasers and body cameras, uses AI to generate first drafts of police reports based on audio from officers' body cameras.

In Oklahoma City, police officers have embraced the technology for its time-saving potential. In one case, an AI-generated report took just eight seconds to produce, compared with the 30 to 45 minutes an officer would typically spend writing one up manually.

Proponents argue that the technology allows officers to spend more time on active policing rather than paperwork. However, the introduction of AI-generated police reports has raised significant concerns among legal scholars, prosecutors, and community activists. Critics worry about the potential for AI to perpetuate biases, introduce errors, or alter a fundamental document in the criminal justice system.

Andrew Ferguson, a law professor at American University, highlights the risks of AI hallucinations – instances where the AI might generate convincing but false information. "I am concerned that automation and the ease of the technology would cause police officers to be sort of less careful with their writing," Ferguson warns, emphasizing the critical role police reports play in determining whether an officer's actions justify someone's loss of liberty.

As police departments like Oklahoma City's cautiously implement the technology, they are setting initial boundaries. For now, the AI tool is only used for minor incident reports that don't lead to arrests. However, in other cities like Lafayette, Indiana, officers are permitted to use the AI for any type of case.

The debate surrounding AI-generated police reports underscores the need for careful consideration and potential regulation as this technology becomes more widespread. While it offers undeniable benefits in efficiency, the implications for justice, accountability, and community relations remain to be fully understood.

That’s a wrap!

We’ll see you again next week. Please send us your thoughts and any ideas you have to improve this content.

If you are implementing AI in your business and would be willing to share your use case with our team, we would love to feature you in our newsletter. Please send any examples to the email below.

If you have any questions you can reach out to us at [email protected]

Cheers,

The Augmented AI Team