A world of spatiotemporal data
Existing data architectures weren't designed to manage the billions of data records created by moving devices as they change location over time.
Instead of finding workarounds for systems that have become slow and expensive, data teams can now make full use of new and historic data with purpose-designed technology.
How does it work?
More about the platform
Based on a highly optimized database kernel containing multiple technologies and unique architectural features, the platform provides an unparalleled foundation and scalable solution for processing high velocity, high volume spatiotemporal workloads.
A state-of-the-art thread-per-core architecture, a vectorised storage model and user-space scheduling of I/O and execution combine with the ability to simultaneously ingest, index and query streaming workloads.
Run spatiotemporal queries on your data by entity ID, latitude/longitude and date/time.
Ingest and filter on up to 8 custom attributes to quickly find records of interest.
Store and retrieve an unlimited number of additional attributes with each record.
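The record shape described above can be sketched in plain Python. This is an illustration of the data model only, not the platform's API: the class and function names are hypothetical, and the filter simply mirrors the documented query dimensions (entity ID, position, time) plus the distinction between the up-to-8 indexed attributes and the unlimited stored attributes.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class SpatiotemporalRecord:
    """Illustrative record: entity ID, position, timestamp, a handful of
    indexed (filterable) attributes, plus free-form stored attributes."""
    entity_id: str
    latitude: float
    longitude: float
    timestamp: datetime
    indexed_attrs: dict = field(default_factory=dict)  # up to 8, filterable
    stored_attrs: dict = field(default_factory=dict)   # unlimited, retrieval only

def matches(rec, bbox, start, end, **attr_filters):
    """Filter one record by bounding box, time window and indexed attributes."""
    min_lat, min_lon, max_lat, max_lon = bbox
    return (
        min_lat <= rec.latitude <= max_lat
        and min_lon <= rec.longitude <= max_lon
        and start <= rec.timestamp <= end
        and all(rec.indexed_attrs.get(k) == v for k, v in attr_filters.items())
    )

rec = SpatiotemporalRecord(
    entity_id="vehicle-42",
    latitude=51.5074, longitude=-0.1278,
    timestamp=datetime(2023, 6, 1, 12, 0, tzinfo=timezone.utc),
    indexed_attrs={"fleet": "north"},
    stored_attrs={"firmware": "1.9.2"},
)
print(matches(rec, (51.0, -1.0, 52.0, 0.0),
              datetime(2023, 6, 1, tzinfo=timezone.utc),
              datetime(2023, 6, 2, tzinfo=timezone.utc),
              fleet="north"))  # True
```

In a production system the filter would of course be evaluated inside the engine's index rather than per record in application code; the sketch only fixes the shape of the query.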
The platform sits alongside your existing data infrastructure.
[Diagram: real-time engine, data lakes/warehouses, real-time apps, real-time events]
Processing Data in General System vs PostGIS
Technology that is purpose-designed for managing spatiotemporal data can provide significant benefits.
PostGIS: based on ingestion of 17 billion AdTech data records on a db.r6g.16xlarge server with 5 metadata fields.
General System: based on ingestion of 101 billion AdTech data records on an i3en.12xlarge server with 6 metadata fields. Costs include ETL and ingestion costs.
Deep dive into the platform's unique architecture
4-dimensional index design enables diverse and unrelated spatiotemporal datasets to be easily aggregated. Run complex spatiotemporal queries such as polygon relationships and geofencing on 10-100 billion records in seconds.
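Geofencing, as mentioned above, ultimately reduces to asking whether a point falls inside a polygon. The platform evaluates this against its 4-dimensional index at scale; as a standalone illustration of the underlying test, here is the standard ray-casting algorithm in Python (the fence coordinates are made up for the example):

```python
def point_in_polygon(lat, lon, polygon):
    """Ray-casting test: does (lat, lon) fall inside the polygon,
    given as a list of (lat, lon) vertices?"""
    inside = False
    n = len(polygon)
    for i in range(n):
        lat1, lon1 = polygon[i]
        lat2, lon2 = polygon[(i + 1) % n]
        # Does a ray cast from the point cross this edge?
        if (lon1 > lon) != (lon2 > lon):
            cross_lat = lat1 + (lon - lon1) / (lon2 - lon1) * (lat2 - lat1)
            if lat < cross_lat:
                inside = not inside
    return inside

# A rough quadrilateral around central London (illustrative coordinates).
fence = [(51.48, -0.20), (51.48, -0.05), (51.54, -0.05), (51.54, -0.20)]
print(point_in_polygon(51.5074, -0.1278, fence))  # True: central London
print(point_in_polygon(52.4862, -1.8904, fence))  # False: Birmingham
```

Running this per point over 10-100 billion records is exactly what a naive system cannot do in seconds; the spatiotemporal index exists to prune the vast majority of records before any geometry test runs.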
A thread-per-core architecture, with an I/O scheduler designed for high-dimension indexes, uses its visibility into and control over storage access and cache to reorder, optimise and reduce storage operations.
Continuous, adaptive, background re-sharding distributes the data workload evenly to mitigate overloads and bottlenecks.
A high-density storage architecture creates a non-POSIX file system that coexists with standard Linux environments to bypass traditional scalability and performance limitations. The platform processes petabytes of data with consistent performance across millions of logical files while implementing additional features that conventional file systems do not support.
Check out the speed and scale of the solution with a synthetic dataset of 1.6 million vehicles in London. The data is based on vehicle number plates with historic movements loaded in time order to realistically simulate a real-time feed of 92.4 billion datapoints.
Supported data types
Augment streaming or batch spatiotemporal data from multiple sources
Robots and drones
High-volume or automated financial transactions
IP packets and IP traffic patterns
Static sensors that track moving objects
Networks of cameras, such as number-plate recognition
Indexing pixels or tiles to retrieve specific features at scale