Last year I watched a friend edit a photo on her phone. She removed a person from the background with a single tap. She did not know she was using generative AI. As far as she was concerned, it was just a feature of the camera app. That moment has stuck in my head ever since, because it captures something important about where we are with artificial intelligence right now.
The AI you do not choose
Most of the conversation about AI focuses on tools people deliberately pick up: chatbots, image generators, summarisers. Call that explicit AI. You open it, you use it, you evaluate the output, you close it. You know what you are doing.
But there is a second category that gets far less attention, and it matters more. Invisible AI is the AI already embedded in tools you use every day, often without any clear disclosure. Your email sorts itself, drafts suggested replies, flags messages as important. Google's search results now open with an AI-generated summary before you see any actual links. Microsoft Copilot is threaded through Word, Excel, PowerPoint, and Teams. Apple, Google, and Samsung all ship phones where the "camera" routinely adds, removes, or alters elements of a photograph using generative models.
None of this is inherently wrong. But the lack of transparency is a real problem. When AI operates invisibly, you lose the ability to question its outputs, understand its limitations, or make an informed choice about whether to trust it. For organisations working with vulnerable populations, handling sensitive data, or operating in regulated environments, that matters. A staff member researching safeguarding policy through an AI-assisted search engine may not realise the summary contains fabricated information. A donor's "photograph" may have been silently enhanced by on-device AI. Under GDPR, you need to know whether your email provider is sending message content to a third-party model. These are not hypothetical concerns. They are happening now, in ordinary workflows, at organisations that have never made a conscious decision to adopt AI.
The EU AI Act, which began phased implementation in 2025, directly addresses this. AI systems that interact with people must disclose that fact. AI-generated content must be marked in a machine-readable format. High-risk systems (including those used in education, employment, and public services) face additional transparency requirements. The direction is clear: transparency is becoming a legal requirement. Organisations that understand this now will be better prepared than those that wait. I wrote more about the broader regulatory picture in Data Sovereignty, AI, Copyright, and Consent.
A plain-language glossary
The terminology around AI can feel like gatekeeping. If you do not know what an LLM is, the implication goes, you should not have opinions about AI policy. That is nonsense. Here are the terms that actually come up when I am working with charities, museums, and social enterprises. Each one is explained in enough detail to be useful, and no more.
Machine Learning (ML)
A way of building software where, instead of writing explicit rules, you give the system a large amount of data and let it find patterns on its own. Most of what people call AI today is actually machine learning.
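If you want to see that difference rather than take my word for it, here is a deliberately tiny sketch in Python. The library (scikit-learn) and the made-up messages are purely for illustration: a hand-written rule sits next to a small model that learns the same kind of pattern from a few labelled examples.

```python
# A toy contrast between explicit rules and machine learning.
# The messages and labels below are invented for illustration;
# scikit-learn is assumed to be installed.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

# Rule-based approach: a person writes the pattern explicitly.
def rule_based_spam_check(message: str) -> bool:
    return "win a prize" in message.lower()

# Machine-learning approach: the pattern is inferred from labelled data.
messages = [
    "Win a prize now, click here",
    "Claim your free prize today",
    "Minutes from last week's trustee meeting",
    "Volunteer rota for the weekend",
]
labels = ["spam", "spam", "not spam", "not spam"]

model = make_pipeline(CountVectorizer(), MultinomialNB())
model.fit(messages, labels)

print(model.predict(["Free prize if you click"]))      # likely ['spam']
print(model.predict(["Agenda for the next meeting"]))  # likely ['not spam']
```

The point is not the code itself. It is that nobody wrote a rule saying "prize means spam"; the model inferred it from the examples, and that is what "letting the system find patterns on its own" means in practice.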
Foundation Model
A very large system trained on an enormous, broad dataset (often most of the publicly available internet) to develop general capabilities. GPT-4, Llama, and Claude are all foundation models. Building one from scratch costs millions of pounds in computing power, which is why only a handful of organisations do it.
Large Language Model (LLM)
A type of foundation model trained specifically on text. LLMs predict the next word in a sequence, over and over, which turns out to be surprisingly effective at generating coherent writing, answering questions, and summarising documents. The "large" refers to billions of adjustable values inside the model.
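To make "predict the next word, over and over" concrete, here is a tiny sketch that counts which word tends to follow which in a few made-up sentences, then generates text one word at a time. A real LLM does the same kind of thing with billions of learned values instead of a simple count table.

```python
# A toy next-word predictor. The training text is invented;
# real LLMs learn from vastly more data, but the generation
# loop has the same shape: pick a next word, append, repeat.
from collections import Counter, defaultdict

training_text = (
    "the trust published its annual report . "
    "the trust published its accounts . "
    "the museum published its annual report ."
)

# Count which word follows which (a "bigram" table).
counts = defaultdict(Counter)
words = training_text.split()
for current_word, next_word in zip(words, words[1:]):
    counts[current_word][next_word] += 1

# Generate by repeatedly choosing the most likely next word.
word = "the"
output = [word]
for _ in range(6):
    if word not in counts:
        break
    word = counts[word].most_common(1)[0][0]
    output.append(word)

print(" ".join(output))  # e.g. "the trust published its annual report ."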
Training Data
The information used to build a model. For an LLM, this is typically a vast collection of text: books, websites, academic papers, forum posts. The quality and composition of training data profoundly affects what the model can do and what biases it carries. Much of this data has been collected without the knowledge or consent of the people who created it.
Confabulation (often called "hallucination")
When an AI system generates information that sounds plausible but is factually wrong. The industry calls this "hallucination," but confabulation is more accurate. Hallucination implies the system is perceiving something that is not there. Confabulation, a term borrowed from neuroscience, means producing false information without the intent to deceive. An LLM does not "see" things. It generates statistically likely sequences of words, and sometimes those sequences are wrong. This is not a bug that will be fixed with the next update. It is a structural feature of how these models work, and it is why human review remains essential.
Generative AI
AI systems that create new content: text, images, audio, video, code. This is the category that includes chatbots, image generators, and music generators. The word "generative" distinguishes these from AI systems that classify, sort, or predict (such as a spam filter or a fraud detection system).
Synthetic Media and Deepfakes
Any media created or substantially altered by AI is synthetic media. A photograph generated by Midjourney, a voice cloned from three seconds of someone's real speech, a news article written by an LLM. The term is deliberately neutral: synthetic media is not inherently harmful, but it becomes dangerous when presented as authentic. Deepfakes are a specific subset where AI creates a convincing fake of a real person, usually in video or audio. In 2024, a finance worker in Hong Kong transferred $25 million after a video call with what appeared to be the company's CFO but was a deepfake. Detecting this kind of manipulation requires forensic analysis of the sort Jura Trace is designed to provide.
Edge AI / On-Device AI
AI that runs on the device in front of you rather than on a remote server. This matters for privacy, because your data never leaves the device. It also matters for speed, reliability, and cost. Jura Trace is built on this principle: everything runs locally, nothing is uploaded anywhere.
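As a rough illustration of what "runs locally" looks like, here is a minimal sketch using the open-source Hugging Face transformers library with a small open-weight model. The specific model (distilgpt2) is just an example, not a recommendation, and this is not how any particular product works internally. The model file is downloaded once; after that, the text you give it is processed on your own machine and never sent to a server.

```python
# A minimal sketch of on-device text generation, assuming the
# "transformers" library is installed. The model weights are
# fetched once from the Hugging Face Hub; inference then runs
# entirely locally, so the prompt below never leaves the machine.
from transformers import pipeline

generator = pipeline("text-generation", model="distilgpt2")

result = generator("Our charity's mission is", max_new_tokens=20)
print(result[0]["generated_text"])
```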
Open Source vs Open Weight
Often used interchangeably, but they mean different things. Open source traditionally means the code, the data, and the methodology are all published so anyone can inspect, modify, and redistribute them. Open weight means the model's trained parameters are published, but the training data and full methodology may not be. Most "open source" AI models are actually open weight. The distinction matters because open weight models can be audited to a point, but not fully reproduced or independently verified.
What to do with this
You do not need to become an AI expert. But you do need to know enough to ask the right questions. Start by auditing your invisible AI. Look at the software your organisation already uses and check whether AI features have been enabled by default. Your email provider, your office suite, your phone's camera app, your web browser. You may be surprised.
From there, consider a simple AI policy. It does not need to be long. At minimum it should cover which tools staff are permitted to use, what data can and cannot be entered into AI systems, and how AI-generated content should be labelled. The goal is informed use, not prohibition. Ask your software vendors whether AI features are active and where the processing happens. Under GDPR and the EU AI Act, that is not an unreasonable question. It is due diligence.
And for content your organisation relies on (evidence, reports, photographs, communications from partners), consider whether verification is needed. AI-generated and AI-altered content is becoming harder to distinguish from authentic material. If your organisation needs guidance on any of this, Jura Labs offers advisory services tailored to non-profits, charities, and purpose-driven organisations. If you need to verify whether content is authentic or AI-generated, Jura Trace is built for exactly that.
References
- European Parliament, Regulation (EU) 2024/1689 (EU AI Act), Official Journal of the European Union. eur-lex.europa.eu/eli/reg/2024/1689
- Meta AI, Llama 3.1 Model Card, 2024. github.com/meta-llama/llama-models
- UK Information Commissioner's Office, Guidance on AI and Data Protection, 2024. ico.org.uk/for-organisations/uk-gdpr-guidance-and-resources/artificial-intelligence
- CNN, Finance worker pays out $25 million after video call with deepfake "chief financial officer", February 2024. edition.cnn.com/2024/02/04/asia/deepfake-cfo-scam-hong-kong-intl-hnk
- Ada Lovelace Institute, Foundation Models in the Public Sector, 2024. adalovelaceinstitute.org
- Stanford HAI, AI Index Report 2025. aiindex.stanford.edu