Cloud LLM
A language model accessed via a third-party provider's API (OpenAI, Anthropic, Google, and others), where inference runs on the provider's infrastructure and prompt content is sent to the provider for processing.
How it works
Cloud LLMs are the fastest path to capability and the most common entry point for UK businesses experimenting with AI. They are also the architecture that introduces the most regulatory and contractual exposure in serious enterprise deployment. Under the standard commercial contract, content is sent to the provider for inference; where the provider processes that content outside the UK, this constitutes a restricted transfer (a data export) under UK GDPR. For unregulated workloads involving no personal data, confidential IP, or otherwise sensitive material, cloud LLMs are fine. For FCA-regulated, SRA-regulated, NHS, ITAR-sensitive, or OEM-contracted workloads, cloud LLMs without additional architectural controls (PII redaction gateways, contractual zero-retention terms, regional API endpoints) are usually unacceptable. Ayoob AI deploys cloud LLMs only where the data-handling architecture supports the client's regulatory position.
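As a rough illustration of the controls above, here is a minimal sketch of a PII-redaction gateway in Python. The regional endpoint URL, model name, request format, and PII patterns are all illustrative assumptions, not any real provider's API; a production gateway would use a dedicated PII-detection service rather than a handful of regexes.

```python
import re
import json
import urllib.request

# Hypothetical regional endpoint and model name -- placeholders only.
# Regional pinning and zero-retention must be agreed contractually with
# the provider; nothing in the request itself creates those guarantees.
API_BASE = "https://eu.api.example-llm-provider.com/v1/chat"  # illustrative
API_KEY = "REPLACE_ME"

# Minimal regex patterns for common UK PII, purely to show the redaction step.
PII_PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "UK_PHONE": re.compile(r"(?:\+44\s?|\b0)\d{4}\s?\d{6}\b"),
    "NI_NUMBER": re.compile(r"\b[A-Z]{2}\s?\d{2}\s?\d{2}\s?\d{2}\s?[A-D]\b"),
}

def redact(text: str) -> str:
    """Replace detected PII with typed placeholders before any data export."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

def query_cloud_llm(prompt: str) -> str:
    """Redact locally, then forward the sanitised prompt to the regional API."""
    sanitised = redact(prompt)
    body = json.dumps({"model": "example-model", "prompt": sanitised}).encode()
    req = urllib.request.Request(
        API_BASE,
        data=body,
        headers={
            "Authorization": f"Bearer {API_KEY}",
            "Content-Type": "application/json",
        },
    )
    with urllib.request.urlopen(req) as resp:
        return resp.read().decode()

if __name__ == "__main__":
    # PII never leaves the client's infrastructure unredacted.
    print(redact("Contact jane.doe@acme.co.uk on 07700 900123 re: QQ 12 34 56 C"))
```

The design point is that redaction happens on the client side, so unredacted personal data never crosses the provider boundary; zero retention, by contrast, remains a contractual rather than a technical control.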
Related terms
Private AI
AI deployed on infrastructure the client controls (on-premise, in the client's cloud tenancy, or air-gapped), with no third-party LLM provider in the data path and no inference-time data export.
On-Premise AI
AI deployed on hardware the client owns and operates inside their own data centre or office facility, with no dependency on external cloud or model providers for inference.
Data Residency
The geographic location where data is stored and processed, with regulatory requirements (UK GDPR, sector-specific rules) often constraining where personal or regulated data can travel.
Want to see this technology in action?
Book a Discovery Call