Trade Secret Governance for Norwegian AI-Based Predictive Policing

1. Waymo LLC v. Uber Technologies, Inc. (2017–2018)

Core issue

A former engineer allegedly transferred self-driving car trade secrets (LiDAR systems, sensor fusion algorithms) from Waymo to Uber.

Why it matters for predictive policing AI

Predictive policing systems rely on:

  • multi-source data fusion (crime reports, location data, social signals),
  • pattern recognition models,
  • geospatial prediction engines.

These are structurally similar to autonomous driving AI.

Court reasoning

Although the case settled mid-trial in 2018, the court's pretrial rulings emphasized that:

  • trade secrets cover system architecture and ML pipelines, not just source code,
  • even partial replication of architecture can constitute misappropriation,
  • forensic evidence (emails, download logs) is pivotal.

Governance implication in policing AI

For Norwegian predictive policing:

  • vendor-built crime prediction models can be treated as protected trade secrets,
  • but law enforcement must ensure auditability without exposing the full architecture (a minimal sketch of one approach follows this list),
  • and strict separation between operational deployment and model training environments is required.
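
One concrete pattern for auditability without full exposure is to give reviewers a verifiable fingerprint of the deployed model plus aggregate metrics, rather than the model itself. Below is a minimal Python sketch under that assumption; the class, system name, placeholder artifacts, and metric values are all hypothetical, not a reference to any real system.

```python
import hashlib
import json
from dataclasses import dataclass, asdict

@dataclass
class ModelAuditRecord:
    """What an external auditor sees: fingerprints and metrics, not internals."""
    model_id: str
    version: str
    weights_sha256: str            # fingerprint of the weights, not the weights
    training_manifest_sha256: str  # fingerprint of the dataset manifest
    evaluation_metrics: dict       # aggregate performance figures only

def fingerprint(artifact: bytes) -> str:
    """Hash an artifact so auditors can verify which one ran without seeing it."""
    return hashlib.sha256(artifact).hexdigest()

# Placeholder bytes stand in for the vendor's sealed model and manifest files.
record = ModelAuditRecord(
    model_id="hotspot-forecaster",  # hypothetical system name
    version="2.3.1",
    weights_sha256=fingerprint(b"...sealed model weights..."),
    training_manifest_sha256=fingerprint(b"...dataset manifest..."),
    evaluation_metrics={"auc": 0.81, "false_positive_rate": 0.12},
)
print(json.dumps(asdict(record), indent=2))
```

The vendor's architecture never leaves its environment, while the police side can still prove after the fact exactly which model version produced a given decision.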

2. IBM v. Papermaster (2008–2009)

Core issue

IBM sought to prevent a senior executive from joining Apple due to “inevitable disclosure” of confidential chip design knowledge.

Legal principle: inevitable disclosure doctrine

Even without proven theft, courts may restrict employment if:

  • the employee’s knowledge is too sensitive,
  • and disclosure is highly likely in the new role.

Relevance to predictive policing AI

In Norway:

  • AI engineers working on police prediction systems may move to private security firms,
  • or to international surveillance technology companies.

This creates a risk of leakage of:

  • feature engineering logic,
  • bias-tuning strategies,
  • crime prediction weighting systems.

Governance implication

Police AI governance must include:

  • post-employment restrictions for key model developers,
  • compartmentalization of model knowledge (no single engineer understands full pipeline),
  • strict role-based access control (a minimal sketch follows this list).
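
A minimal sketch of what compartmentalized, role-based access could look like in code; the roles and pipeline resources are illustrative assumptions, not a description of any deployed Norwegian system. The key property is that no single role spans the whole pipeline.

```python
# Hypothetical roles and resources; deny-by-default access model.
ROLE_PERMISSIONS = {
    "feature_engineer": {"feature_pipeline", "raw_crime_reports"},
    "model_trainer":    {"training_jobs", "model_weights"},
    "deployment_ops":   {"serving_config", "deployment_logs"},
    "auditor":          {"deployment_logs", "evaluation_metrics"},
    # Deliberately, no role covers the full pipeline end to end.
}

def can_access(role: str, resource: str) -> bool:
    """Unknown roles and unlisted resources are denied by default."""
    return resource in ROLE_PERMISSIONS.get(role, set())

assert can_access("model_trainer", "model_weights")
assert not can_access("feature_engineer", "model_weights")  # compartmentalized
```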

3. E.I. du Pont de Nemours & Co. v. Kolon Industries (2011)

Core issue

Systematic theft of Kevlar manufacturing trade secrets, carried out by recruiting former DuPont employees.

Court finding

  • trade secrets include process knowledge and optimization methods,
  • misappropriation through hiring a competitor's insiders is actionable,
  • heavy damages were imposed for deliberate industrial espionage (a $919.9 million jury verdict, later vacated on appeal before the parties settled).

Relevance to predictive policing AI

Predictive policing systems depend heavily on:

  • feature selection methods (crime predictors),
  • data cleaning pipelines (crime report normalization),
  • tuning thresholds for alerts.

These are “process secrets,” not just code.

Governance implication

Norwegian police AI vendors must enforce:

  • clean-room development for model replication,
  • strict hiring vetting for engineers from competitors,
  • logging of all access to training datasets and model weights (a tamper-evident logging sketch follows this list).
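
The Waymo litigation turned on forensic evidence of downloads, which is only usable if logs cannot be quietly rewritten. A hash-chained, append-only log is one standard way to get that property; the sketch below is illustrative, with hypothetical field names.

```python
import hashlib
import json
import time

def append_access_event(log: list, user: str, resource: str, action: str) -> None:
    """Append an event whose hash covers the previous entry's hash."""
    prev_hash = log[-1]["entry_hash"] if log else "genesis"
    event = {
        "ts": time.time(),
        "user": user,          # e.g. an engineer's identity
        "resource": resource,  # e.g. "training_set_v4", "model_weights_v2"
        "action": action,      # e.g. "read", "download"
        "prev_hash": prev_hash,
    }
    event["entry_hash"] = hashlib.sha256(
        json.dumps(event, sort_keys=True).encode()
    ).hexdigest()
    log.append(event)

log: list = []
append_access_event(log, "engineer_42", "model_weights_v2", "download")
append_access_event(log, "engineer_42", "training_set_v4", "read")
# Editing any earlier entry breaks every later entry_hash, which is what
# makes such a log usable as forensic evidence of misappropriation.
```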

4. Motorola Solutions, Inc. v. Hytera Communications Corp. (2017–2020)

Core issue

Large-scale theft of digital radio communication system designs.

Court holding

  • US trade secret law (here, the Defend Trade Secrets Act) can reach misappropriation that occurs largely abroad,
  • trade secret theft in complex systems is treated as corporate-level wrongdoing, not merely individual misconduct,
  • entire system architectures, including source code and documentation, are protectable.

Relevance to predictive policing AI

Predictive policing systems often include:

  • distributed cloud infrastructure,
  • real-time crime alert systems,
  • secure communication between police units.

These resemble Motorola’s system-level architecture.

Governance implication

For Norway:

  • predictive policing AI deployed across municipalities requires:
    • secure inter-agency data sharing,
    • encrypted and authenticated model updates,
    • traceable deployment logs (an integrity-check sketch follows this list),
  • cross-border vendor hosting introduces legal exposure under trade secret law.
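
For authenticated model updates, the minimum useful property is that a municipality can verify an update really came from the central authority and was not altered in transit. The sketch below shows only that authentication step, using an HMAC with a pre-shared key; in practice the update would also be encrypted and the key held in a key-management service, and every name here is an assumption.

```python
import hashlib
import hmac

# Assumption: a key provisioned out of band to each municipal deployment;
# hard-coding it as below is for illustration only.
SHARED_KEY = b"replace-with-managed-secret"

def sign_update(model_bytes: bytes) -> str:
    """Central authority tags each model update before distribution."""
    return hmac.new(SHARED_KEY, model_bytes, hashlib.sha256).hexdigest()

def verify_update(model_bytes: bytes, signature: str) -> bool:
    """Municipal deployment checks the tag before loading the model."""
    expected = sign_update(model_bytes)
    return hmac.compare_digest(expected, signature)

update = b"...serialized model weights..."
sig = sign_update(update)
assert verify_update(update, sig)              # genuine update accepted
assert not verify_update(update + b"x", sig)   # tampered update rejected
```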

5. Peabody v. Norfolk (1868)

Core issue

Whether a secret manufacturing process (machinery and methods for producing gunny cloth) could be protected against use by a former employee who had promised confidentiality.

Legal principle established

Peabody anticipated the modern three-part test, later codified in statutes such as the Uniform Trade Secrets Act: a trade secret exists if:

  1. information has economic value,
  2. it is not generally known,
  3. reasonable efforts are made to keep it secret.

Relevance to predictive policing AI

This is foundational for determining whether:

  • crime prediction models,
  • hotspot forecasting algorithms,
  • or data enrichment techniques
    qualify as trade secrets.

Governance implication

Norwegian law enforcement agencies must:

  • demonstrate “reasonable secrecy measures” for their AI systems,
  • even where parts of the system use public data,
  • ensure model training pipelines are access-controlled (one way to document such measures is sketched below).
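
Because the test asks for “reasonable efforts,” agencies benefit from being able to show those efforts asset by asset. One illustrative approach, sketched below with hypothetical asset names and values, is a machine-readable manifest of secrecy controls that can be checked automatically and produced as evidence if needed.

```python
# Hypothetical per-asset record of secrecy measures; values are assumptions.
SECRECY_MANIFEST = {
    "model_weights": {
        "encrypted_at_rest": True,
        "access_roles": ["model_trainer"],
        "access_logged": True,
    },
    "training_pipeline": {
        "encrypted_at_rest": True,
        "access_roles": ["model_trainer", "feature_engineer"],
        "access_logged": True,
    },
    "public_crime_statistics": {
        # Public inputs need no secrecy; the derived pipeline still does.
        "encrypted_at_rest": False,
        "access_roles": ["*"],
        "access_logged": False,
    },
}

def unevidenced_secret_assets(manifest: dict) -> list:
    """Flag assets that claim restricted access but have no access logging."""
    return [name for name, m in manifest.items()
            if m["access_roles"] != ["*"] and not m["access_logged"]]

assert unevidenced_secret_assets(SECRECY_MANIFEST) == []
```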

6. DuPont v. Christopher (1970)

Core issue

Defendants used aerial photography to observe a DuPont methanol plant under construction, whose exposed layout revealed a secret, unpatented process.

Court ruling

Even lawful means of observation can be improper if they circumvent reasonable secrecy protections.

Relevance to predictive policing AI

Predictive policing systems often rely on:

  • spatial mapping of crime hotspots,
  • surveillance-enhanced datasets,
  • inferred behavioral patterns.

If adversaries reconstruct models by observing outputs (e.g., crime heatmaps), this case becomes relevant.

Governance implication

  • Publishing predictive outputs (heatmaps, risk zones) may indirectly expose model logic,
  • requiring careful output “sanitization” or aggregation, as in the sketch below.
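
A simple form of sanitization is to aggregate fine-grained risk scores into coarser cells and suppress near-empty cells entirely, so a published heatmap reveals less of the model's decision surface to anyone probing it. The sketch below is a toy illustration; grid size, block size, and the suppression threshold are arbitrary assumptions.

```python
def sanitize_heatmap(grid, block=4, min_mass=3.0):
    """Average block x block neighborhoods; suppress cells below min_mass."""
    rows, cols = len(grid), len(grid[0])
    out = []
    for r in range(0, rows, block):
        row_out = []
        for c in range(0, cols, block):
            cells = [grid[i][j]
                     for i in range(r, min(r + block, rows))
                     for j in range(c, min(c + block, cols))]
            total = sum(cells)
            # Publish nothing for sparse cells rather than a precise low
            # value an adversary could use to reconstruct model behavior.
            row_out.append(total / len(cells) if total >= min_mass else None)
        out.append(row_out)
    return out

fine = [[0.1] * 8 for _ in range(8)]  # toy 8x8 risk grid
fine[2][3] = 5.0                      # one sharp hotspot
print(sanitize_heatmap(fine))         # 2x2 output: hotspot kept, rest suppressed
```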

7. Ruckelshaus v. Monsanto Co. (1984)

Core issue

Whether government disclosure of submitted pesticide data constituted a taking of trade secrets.

Key holding

Trade secrets are property rights protected by the Fifth Amendment's Takings Clause (US context), but a submitter who hands data to the government knowing it may be disclosed has a weakened expectation of confidentiality.

Relevance to predictive policing AI

In Norway:

  • private AI vendors may supply models to police,
  • but Norwegian transparency rules (e.g., offentleglova, the Freedom of Information Act) may require partial disclosure.

Governance implication

  • Contracts must define:
    • what parts of AI models remain confidential,
    • what must be disclosed for accountability,
  • especially when the algorithms influence policing decisions affecting liberty (a machine-readable version of such a split is sketched below).
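
Such a contractual split can also be encoded so that release decisions are checked mechanically rather than ad hoc. The sketch below is purely illustrative; the categories are assumptions about what a contract might designate, not a statement of what Norwegian disclosure law requires.

```python
# Hypothetical contract-defined classification of AI artifacts.
DISCLOSURE_POLICY = {
    "confidential": [
        "model_weights",
        "feature_engineering_logic",
        "tuning_thresholds",
    ],
    "disclosable": [
        "input_data_categories",       # what kinds of data feed the model
        "fairness_and_error_metrics",  # aggregate performance per group
        "decision_role",               # how outputs influence police action
    ],
}

def releasable(artifact: str) -> bool:
    """Fail loudly on anything the contract never classified."""
    if artifact in DISCLOSURE_POLICY["disclosable"]:
        return True
    if artifact in DISCLOSURE_POLICY["confidential"]:
        return False
    raise ValueError(f"artifact {artifact!r} is not classified in the contract")

assert releasable("fairness_and_error_metrics")
assert not releasable("model_weights")
```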

8. United States v. Aleynikov (2012)

Core issue

A former Goldman Sachs programmer copied proprietary high-frequency trading code before leaving for a competitor.

Court reasoning

  • under the federal criminal statutes as then written, the stolen code had to relate to a product “produced for or placed in” interstate commerce; Goldman's internal trading system did not qualify, so the conviction was reversed,
  • but civil trade secret protection was unaffected, and Congress soon closed the criminal gap (Theft of Trade Secrets Clarification Act of 2012).

Relevance to predictive policing AI

Predictive policing algorithms often resemble:

  • high-frequency decision systems,
  • real-time scoring of crime risk.

Governance implication

  • Even if criminal liability thresholds vary, civil trade secret protection is strong,
  • reinforcing the need for strict cybersecurity and access monitoring.

Synthesis: Trade Secret Governance in Norwegian Predictive Policing AI

1. What is protected

Based on these cases, protected trade secrets likely include:

  • crime prediction models (ML weights, architectures),
  • feature engineering logic (risk indicators),
  • dataset enrichment pipelines,
  • deployment heuristics used by police dashboards.

2. Key legal risks

  • employee mobility (inevitable disclosure, per Papermaster),
  • vendor-to-vendor leakage (DuPont v. Kolon logic),
  • system-level replication (Motorola principle),
  • inference attacks from outputs (Christopher doctrine).

3. Transparency conflict (unique to policing)

Unlike private-sector AI:

  • predictive policing affects liberty and surveillance rights,
  • courts and oversight bodies are more likely to require algorithmic accountability.

So Norway must balance:

  • trade secret protection (commercial/vendor interest)
    vs
  • due process and fairness (constitutional/admin law principles)

4. Governance best practices derived from case law

  • compartmentalized access to model architecture,
  • strict logging of dataset access (Waymo principle),
  • post-employment restrictions for key engineers (IBM principle),
  • clean-room model development when switching vendors (DuPont v. Kolon principle),
  • controlled output disclosure to prevent reverse engineering (Christopher principle).

Final insight

Across all these cases, a consistent legal pattern emerges:

Courts protect AI systems as trade secrets not because they are “software,” but because they are complex, layered decision systems where value lies in hidden optimization, not visible code.

In Norwegian AI-based predictive policing, this creates a legal paradox:

  • the more accurate and valuable the system, the more likely it is to be treated as a protected trade secret,
  • but the more it affects public rights, the stronger the pressure for transparency.
