Law firm restricts AI after 'significant' staff use

Published On Feb 11, 2025, 7:04 PM

Hill Dickinson, a UK law firm, has restricted access to AI tools such as ChatGPT, citing a significant rise in unauthorized use among staff. The firm is introducing a request process for access, emphasizing safe and effective use in line with its AI policy. It acknowledges AI's potential to enhance its capabilities but stresses the importance of proper training and understanding. Regulatory bodies are encouraging firms to adopt AI technology responsibly rather than ban it outright. A survey indicates a growing expectation of AI usage in the legal sector.

Stock Forecasts

AI

Positive

The legal sector is increasingly adopting AI tools, creating growth potential for companies that provide AI solutions for legal applications. Firms well positioned to help law practices integrate AI efficiently stand to benefit. Given the growing push for AI acceptance in the legal field, investments in responsible AI technology companies could yield positive results, and demand for AI training and compliance services in legal settings may also increase.

HACK

Negative

Firms that discourage AI usage face risks around compliance and operational efficiency. If law firms struggle to integrate AI effectively and safely, associated solution providers may suffer as well. Investors could grow hesitant toward companies perceived as non-compliant or out of step with emerging technological trends.

Related News

Modern religious leaders are experimenting with A.I. just as earlier generations examined radio, television and the internet.

AI
GOOGL

(Reuters) - OpenAI on Friday outlined plans to revamp its structure, saying it would create a public benefit corporation to make it easier to "raise more capital than we'd imagined," and remove the restrictions imposed on the startup by its current nonprofit parent. The acknowledgement and detailed rationale behind its high-profile restructuring confirmed a Reuters report in September, which sparked debate among corporate watchdogs and tech moguls including Elon Musk. At issue were the implications such a move might have on whether OpenAI would allocate its assets to the nonprofit arm fairly, and how the company would strike a balance between making a profit and generating social and public good as it develops AI.

Some incidents may be linked to North Korean IT workers infiltrating tech firms, according to research firm Chainalysis.