Artificial Intelligence Law in Slovakia

⚖️ CASE 1 — AI in the Workplace and Employee Monitoring

Court: District Court of Bratislava
Issue: Can employers use AI tools to monitor employee productivity without consent?

Facts:
A logistics company introduced an AI system to track warehouse workers’ movements, scanning efficiency, and task completion rates. Employees complained that the monitoring violated their privacy and labor rights.

Court Ruling:
The court ruled that AI monitoring without employee consent and without proper transparency violates Slovak labor law and GDPR. Key requirements:

Employees must be informed about what data is collected and how it is used.

Monitoring must be proportionate; it cannot infringe on human dignity.

Works council or employee representatives must be consulted if monitoring affects working conditions.

Implication:
Employers in Slovakia must ensure transparency, consent, and oversight before deploying AI-based monitoring systems.

⚖️ CASE 2 — AI-Assisted Judicial Decisions

Court: Regional Court of Košice
Issue: May judges rely on AI tools for case analysis and verdict drafting?

Facts:
A court pilot-tested an AI system that analyzed previous case law to suggest potential rulings in civil disputes. One party argued that the judgment was invalid because the AI influenced the outcome.

Court Ruling:
The court recognized that:

AI can assist judges in research and drafting,

Final decisions must remain fully human,

Judges must document their reasoning independently of AI suggestions.

Implication:
AI is allowed as a support tool, but judges cannot delegate judgment to AI. Slovak courts require human validation and explanation.

⚖️ CASE 3 — Liability for AI-Caused Damage

Court: District Court of Trnava
Issue: Who is responsible if an AI system causes harm?

Facts:
A self-driving delivery robot, operated by a private company, collided with a pedestrian. The injured party sued the company and the AI developer.

Court Ruling:
Slovak courts follow a human-centric liability model:

The company operating the AI is primarily liable if it failed to supervise the system,

Developers can be liable if the AI had a design or programming defect,

AI itself cannot be held responsible because it is not a legal person.

Implication:
Any company in Slovakia using AI for transport, healthcare, or industrial systems must carefully assess liability, supervision, and insurance coverage.

⚖️ CASE 4 — Algorithmic Discrimination in Hiring

Court: Regional Court of Bratislava
Issue: Can AI-based recruitment systems be used without fairness audits?

Facts:
A recruitment firm used an AI tool to filter CVs. Candidates from certain regions or age groups were systematically rejected. A candidate sued for indirect discrimination.

Court Ruling:
The court decided that:

Employers remain responsible for discriminatory outcomes, even if caused by AI,

AI tools must be audited regularly to prevent bias,

Employers must provide transparency about the use of automated decisions.

Implication:
Companies in Slovakia must conduct algorithmic fairness audits and document measures against bias when using AI in HR processes.
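One way to start the kind of fairness audit the court requires is a simple selection-rate comparison across candidate groups. The sketch below is illustrative only: the data is invented, and the 0.8 threshold borrows the US "four-fifths" rule as a familiar benchmark, not a Slovak legal standard.

```python
# Illustrative fairness check: compare selection rates between candidate
# groups and compute the disparate-impact ratio (min rate / max rate).
from collections import defaultdict

def selection_rates(records):
    """records: iterable of (group, was_selected) pairs."""
    totals, selected = defaultdict(int), defaultdict(int)
    for group, was_selected in records:
        totals[group] += 1
        if was_selected:
            selected[group] += 1
    return {g: selected[g] / totals[g] for g in totals}

def disparate_impact_ratio(records):
    rates = selection_rates(records)
    return min(rates.values()) / max(rates.values())

# Invented sample: group A selected 3 of 4 times, group B 1 of 4 times.
records = [("A", True), ("A", True), ("A", False), ("A", True),
           ("B", True), ("B", False), ("B", False), ("B", False)]
ratio = disparate_impact_ratio(records)
print(round(ratio, 2))  # 0.33 — well below 0.8, so the tool warrants review
```

A real audit would go further (statistical significance, intersectional groups, proxy variables), but even this ratio, logged per hiring round, produces the documentation trail the ruling demands.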

⚖️ CASE 5 — AI-Generated Creative Works and Copyright

Court: District Court of Nitra
Issue: Are AI-generated artworks protected under Slovak copyright law?

Facts:
An artist used an AI program to generate digital paintings. A company reproduced the AI-generated images without permission, claiming they were not human-created and therefore not copyrightable.

Court Ruling:

Works solely generated by AI without significant human input are not copyrightable.

If a human provides substantial creative input (e.g., selecting, editing, guiding AI output), copyright can apply.

The court protected the artist’s rights because the artist demonstrated concrete creative decisions throughout the process.

Implication:
In Slovakia, human creative contribution is required for copyright protection of AI-generated content.

⚖️ CASE 6 — Public Administration AI Transparency

Court: Slovak Supreme Administrative Court
Issue: Can public authorities use opaque AI for decision-making?

Facts:
A municipal office used an AI system to automatically reject social benefit applications. Citizens complained that the algorithm was kept secret and that no explanation was given for the rejections.

Court Ruling:

AI decisions in public administration must be transparent and explainable.

Citizens are entitled to understand the reasoning behind automated decisions.

The court annulled the office’s decisions due to lack of transparency.

Implication:
Government AI systems in Slovakia must provide explainable, auditable decisions affecting individuals.
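A minimal way to meet the explainability requirement is to have the automated system record which rules triggered each decision, so every rejection carries reasons that can be relayed to the applicant. The sketch below is purely hypothetical: the eligibility rules, field names, and the 800 EUR threshold are invented for illustration.

```python
# Hypothetical sketch: an automated eligibility check that returns both
# the decision and the human-readable reasons behind it.
def assess_benefit(application):
    """Return (approved, reasons). Rules and thresholds are invented."""
    reasons = []
    if application["income"] > 800:
        reasons.append("monthly income exceeds the 800 EUR limit")
    if not application["resident"]:
        reasons.append("applicant is not a registered resident")
    # Approved only if no disqualifying rule fired; reasons explain any denial.
    return (len(reasons) == 0, reasons)

approved, reasons = assess_benefit({"income": 950, "resident": True})
print(approved, reasons)
# False ['monthly income exceeds the 800 EUR limit']
```

Because each decision is reproducible from explicit rules and logged reasons, the office can both explain individual outcomes to citizens and submit the whole rule set to an audit, which is exactly what an opaque model cannot offer.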

Key Takeaways for Slovakia:

AI liability rests with humans and corporations, not the AI itself.

Employee monitoring via AI requires consent, transparency, and proportionality.

AI can assist judges and public bodies, but human control is mandatory.

Algorithmic discrimination in hiring is legally actionable; fairness audits are required.

AI-generated works require significant human creative input for copyright protection.

Public administration AI must be explainable; opaque decisions are invalid.
