Are You Considering External AI Risks?
December 3, 2025
We often think about AI risk narrowly, as risk arising from our own use of AI. That framing has led some to approach AI defensively, assuming that abstention or avoidance is the safest path forward. While a large portion of AI risk is generated internally, there are also external risks to consider and manage. A recent Bloomberg article by several partners at Fried Frank discusses these risks in a securities litigation context. The article cites the example of Tamraz, Jr. v. Reddit, ongoing litigation alleging that Reddit did not adequately disclose the risk that external AI posed to its business model:
“In Tamraz, Jr. v. Reddit, Inc. et al., plaintiffs allege the company failed to disclose that changes in Google Search’s algorithm and AI Overviews features were altering user behavior. Specifically, the complaint alleges that Google Search’s changes in algorithm caused “zero-click search,” searches in which users stopped their query on Google Search’s AI Overviews, rather than clicking through to the Reddit website.
The complaint alleges that defendants knew that AI Overviews was reducing traffic to the Reddit website dramatically in a manner the company couldn’t quickly mitigate, yet failed to disclose that material fact.”
I won’t go too deep into the securities litigation piece of this, but Reddit could find itself in a bad position if the courts rule that it omitted material risks from its disclosures. This case is a great example of how AI can disrupt your business even if you aren’t actively using it. No amount of ignoring AI will make it go away. Companies should understand how AI affects their sector and how their competitors are using it, develop strategies to mitigate potential harms, and disclose the material risks it poses.