The Hardware We Use: Graphene Active Electronically Scanned Array (AESA)
Synthetic Aperture Radar
Synthetic aperture radar is used to build a three-dimensional map of the non-transmitting environment. The 3D mapping is accurate enough to visualize, in real time, a rice-grain-sized object falling out of someone's pocket onto the floor. A multitude of use cases exists for this technology, ranging from observing foot traffic in and out of stores regardless of whether an individual has a transmitting device on their person, to observing an apple falling off the table in a grocery store, to tracking a drone, missile, or aircraft flying above a secure area. The possible use cases are endless.
Demodulation of Sound from Radio Waves
We use machine learning and state-estimation techniques to train the algorithm to recognize individual words, phrases, and sounds, so that the system can continue to detect audio from radio waves. The result is the ability to detect and analyze, at a receiving station, sounds generated at the transmission site, as though those sounds had been picked up by a microphone (although they were not).
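The pipeline above is ML-based and proprietary. As a much simpler point of reference, the sketch below shows classical AM envelope detection, the textbook way an audio waveform is recovered from a modulated carrier. All signal parameters here (sample rate, tone, carrier frequency, filter length) are invented for illustration and are not drawn from the system described above.

```python
import numpy as np

# Illustrative AM envelope detector: recover a 440 Hz "sound" that
# amplitude-modulates a 10 kHz carrier. All values are synthetic.

fs = 100_000                                 # sample rate, Hz
t = np.arange(0, 0.05, 1 / fs)               # 50 ms of samples
audio = np.sin(2 * np.pi * 440 * t)          # audio tone to recover
carrier = np.cos(2 * np.pi * 10_000 * t)     # RF carrier
rf = (1.0 + 0.5 * audio) * carrier           # amplitude-modulated signal

rectified = np.abs(rf)                       # diode-style rectification
kernel = np.ones(50) / 50                    # crude low-pass (moving average)
envelope = np.convolve(rectified, kernel, mode="same")
recovered = envelope - envelope.mean()       # remove DC; ~ scaled 440 Hz tone
```

The moving-average window (0.5 ms) is long enough to smooth the carrier but short relative to the audio period, so the recovered waveform closely tracks the original tone.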
Kalman Filter
This game-changing algorithm allows us to drastically increase our tracking accuracy. There are currently no other players that intend to implement this for consumer or commercial non-defense products.
The role of the Kalman filter is to take the current known state of the target (position, heading, speed and, possibly, acceleration) and predict the target's new state at the time of the most recent radar measurement. In making this prediction, it also updates the estimate of its own uncertainty (i.e., errors). It then forms a weighted average of this predicted state and the latest measured state, taking account of the radar's known measurement errors and its own uncertainty in the target motion models. Finally, it updates its estimate of the uncertainty of the state estimate.
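The predict-and-update cycle described above can be sketched with a standard constant-velocity filter. Everything below (the state layout, matrix names F/H/Q/R, and noise values) is generic textbook notation chosen for illustration, not the production tracker:

```python
import numpy as np

def kf_predict(x, P, F, Q):
    """Propagate the state estimate and its covariance to the next epoch."""
    x = F @ x              # predicted state
    P = F @ P @ F.T + Q    # uncertainty grows by the process noise
    return x, P

def kf_update(x, P, z, H, R):
    """Blend the prediction with a new radar measurement z."""
    y = z - H @ x                      # innovation (measurement residual)
    S = H @ P @ H.T + R                # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)     # Kalman gain: the weighting factor
    x = x + K @ y                      # weighted average of prediction and z
    P = (np.eye(len(x)) - K @ H) @ P   # updated (reduced) uncertainty
    return x, P

dt = 0.1
F = np.array([[1, 0, dt, 0],
              [0, 1, 0, dt],
              [0, 0, 1, 0],
              [0, 0, 0, 1]], float)    # constant-velocity motion model
H = np.array([[1, 0, 0, 0],
              [0, 1, 0, 0]], float)    # radar measures position only
Q = 0.01 * np.eye(4)                   # process noise (motion-model uncertainty)
R = 0.5 * np.eye(2)                    # measurement noise (radar errors)

x = np.array([0.0, 0.0, 1.0, 0.5])     # state: x, y, vx, vy
P = np.eye(4)                          # initial uncertainty
x, P = kf_predict(x, P, F, Q)
x, P = kf_update(x, P, np.array([0.12, 0.04]), H, R)
```

After the update step the position uncertainty is smaller than before the measurement, which is exactly the "weighted average" behavior described above.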
Joint Probabilistic Data Association Filter (JPDAF)
This algorithm allows us to make sense of large data sets and keep each tracked device's calculations isolated from the others.
The Joint Probabilistic Data Association Filter (JPDAF) allows the sensor to collect many transmissions from many different devices and determine which transmissions are coming from a single device.
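A minimal sketch of the core idea, using single-target probabilistic data association (PDA), a building block of the full JPDAF: each candidate measurement is weighted by how likely it is to have come from the track, with one extra hypothesis for "none of them". The function name, detection probability, and clutter density below are illustrative assumptions:

```python
import numpy as np

def association_probabilities(z_pred, S, measurements,
                              p_detect=0.9, clutter=1e-3):
    """Probability that each measurement (or none) belongs to the track."""
    Sinv = np.linalg.inv(S)
    norm = 1.0 / (2 * np.pi * np.sqrt(np.linalg.det(S)))  # 2D Gaussian normalizer
    likelihoods = []
    for z in measurements:
        v = z - z_pred                              # innovation for this candidate
        likelihoods.append(norm * np.exp(-0.5 * v @ Sinv @ v))
    likelihoods = np.array(likelihoods)
    weights = np.append(p_detect * likelihoods,     # one hypothesis per measurement
                        (1 - p_detect) * clutter)   # "missed detection" hypothesis
    return weights / weights.sum()

z_pred = np.array([10.0, 5.0])            # track's predicted measurement
S = 0.5 * np.eye(2)                       # innovation covariance
measurements = [np.array([10.1, 5.2]),    # close: likely the tracked device
                np.array([14.0, 2.0])]    # far: likely clutter or another device
probs = association_probabilities(z_pred, S, measurements)
```

The full JPDAF extends this by evaluating joint hypotheses across all tracks at once, so a measurement is never double-counted by two tracks.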
Time Difference of Arrival (TDOA)
This allows the sensor to determine the range, i.e. the distance of a transmitting device from the sensor.
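In its simplest form, the difference in arrival times at two receivers fixes the difference in distances to the transmitter (constraining it to a hyperbola). The receiver timestamps below are made-up values for illustration:

```python
# Minimal TDOA sketch: convert an arrival-time difference into a
# range difference using the propagation speed of radio waves.

C = 299_792_458.0  # speed of light, m/s

def range_difference(t_arrival_a, t_arrival_b):
    """Distance-to-A minus distance-to-B implied by the two arrival times."""
    return C * (t_arrival_a - t_arrival_b)

# A signal reaching receiver A 10 ns later than receiver B means the
# transmitter is roughly 3 m farther from A than from B.
dd = range_difference(100e-9, 90e-9)
```

Intersecting several such hyperbolas from multiple receiver pairs pins down the transmitter's position.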
The Angle of Arrival (AoA) Algorithm
This allows the sensor to determine the direction (bearing) from which a device is transmitting.
Our technology combines Angle of Arrival (AoA) information gathered from the antenna array, calibrated to time, with Time Difference of Arrival (TDOA).
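The bearing can be sketched from the phase difference between two antenna elements, using the standard half-wavelength spacing of textbook arrays (the geometry below is a generic assumption, not the proprietary array):

```python
import numpy as np

def angle_of_arrival(delta_phase, d, wavelength):
    """Bearing (radians from broadside) from the inter-element phase difference."""
    return np.arcsin(delta_phase * wavelength / (2 * np.pi * d))

wavelength = 0.125          # ~2.4 GHz Wi-Fi band, metres
d = wavelength / 2          # half-wavelength element spacing
# A phase lag of pi/2 between the two elements corresponds to a 30-degree bearing.
theta = np.degrees(angle_of_arrival(np.pi / 2, d, wavelength))
```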
MUSIC Algorithm (Multiple Signal Classification)
Our technology uses the Multiple Signal Classification (MUSIC) algorithm and probabilistic association to lock onto a signal. This is used to isolate a single transmitting device's signal in an environment where many different, unrelated devices are transmitting. A single wireless transmission from a phone might bounce off walls and other objects and reach our sensor seven or more times. The MUSIC algorithm allows our sensor to associate each of those arrivals with a single device, separating them from the transmissions of unrelated devices. It also allows the sensor to operate in an incredibly noisy environment.
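A toy version of MUSIC for a uniform linear array is sketched below: estimate the sample covariance, split it into signal and noise subspaces, and scan for the angle whose steering vector is orthogonal to the noise subspace. The array size, source angle, and noise level are invented; a real deployment would estimate the covariance from live snapshots:

```python
import numpy as np

rng = np.random.default_rng(0)
M, snapshots = 8, 200           # 8 elements, 200 time snapshots (assumed)
true_angle = np.deg2rad(20.0)   # one simulated source at 20 degrees
d = 0.5                         # element spacing in wavelengths

def steering(theta):
    """Uniform-linear-array response for a source at angle theta."""
    return np.exp(2j * np.pi * d * np.arange(M) * np.sin(theta))

# Simulate snapshots: one narrowband source plus white noise.
s = rng.standard_normal(snapshots) + 1j * rng.standard_normal(snapshots)
X = np.outer(steering(true_angle), s)
X += 0.1 * (rng.standard_normal(X.shape) + 1j * rng.standard_normal(X.shape))

R = X @ X.conj().T / snapshots        # sample covariance
eigvals, eigvecs = np.linalg.eigh(R)  # eigenvalues in ascending order
En = eigvecs[:, :-1]                  # noise subspace (one source assumed)

# MUSIC pseudospectrum: peaks where the steering vector is orthogonal
# to the noise subspace, i.e. at the true arrival angles.
angles = np.deg2rad(np.linspace(-90, 90, 721))
spectrum = [1.0 / np.linalg.norm(En.conj().T @ steering(t))**2 for t in angles]
estimate = np.rad2deg(angles[int(np.argmax(spectrum))])
```

With several sources, the noise subspace simply keeps fewer eigenvectors and the pseudospectrum shows one peak per arrival, which is what lets the sensor separate multipath copies and unrelated transmitters.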
3D Trilateration & Triangulation
To improve accuracy, we perform both trilateration and triangulation, which is not common.
Our technology uses 3D trilateration and triangulation because we measure the tiny changes in the Doppler phase shift of the signal emitted by the transmitter's antenna.
Because we use more than four antennas, we are technically performing multilateration.
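A standard way to solve the multilateration problem is linearized least squares: subtracting the range equation of one reference anchor from the others yields a linear system in the unknown position. The anchor positions and ranges below are synthetic, noise-free values for illustration:

```python
import numpy as np

def multilaterate(anchors, ranges):
    """Solve for a 3D position from >= 4 anchor positions and measured ranges."""
    p0, r0 = anchors[0], ranges[0]              # reference anchor
    # Subtracting |p - p0|^2 = r0^2 from |p - pi|^2 = ri^2 gives, per anchor i:
    #   2 (pi - p0) . p = r0^2 - ri^2 + |pi|^2 - |p0|^2
    A = 2 * (anchors[1:] - p0)
    b = (r0**2 - ranges[1:]**2
         + np.sum(anchors[1:]**2, axis=1) - np.sum(p0**2))
    pos, *_ = np.linalg.lstsq(A, b, rcond=None)
    return pos

anchors = np.array([[0.0, 0.0, 0.0],
                    [10.0, 0.0, 0.0],
                    [0.0, 10.0, 0.0],
                    [0.0, 0.0, 10.0],
                    [10.0, 10.0, 3.0]])          # five receivers -> overdetermined
target = np.array([3.0, 4.0, 1.5])
ranges = np.linalg.norm(anchors - target, axis=1)
estimate = multilaterate(anchors, ranges)        # recovers the target position
```

With more than four anchors the system is overdetermined, and the least-squares solution averages out individual range errors, which is the accuracy benefit of multilateration noted above.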
Passive Tracking vs. Active Tracking
Passive tracking: If a customer has not opted in (i.e., does not have the app), the sensor passively tracks his path throughout the store. We can detect items he has idled in front of, collect that data, and later use it either to ensure the item is still in position or to remind him of his interest in it.
Active tracking: If a customer has opted in (through the app), we can derive the same information, but in a way that is fully transparent to the user.*
* It is legal to passively track Wi-Fi and Bluetooth transmissions, but not cellular.
Omni-directional Propagation
Our technology is the only technology on the market that looks as deep as the packet. We have the ability to analyze raw signals and, more specifically, the packet inside each signal. This allows us to passively track any transmitting device.
Technology Suite 01
The Capabilities of Our Wireless Infrastructure Eclipse the Traditional Wi-Fi Router
Our first technology suite comprises cutting-edge base stations and asset-tracking tags, enabling a plethora of unique product features that will be deployed across our three subsidiaries. VIDI will focus on the B2B market, VICI on defense in the public-sector market, and VENI on the B2C market. Each subsidiary will target its respective market segment with advanced solutions such as passive and active centimeter-level tracking of transmitting devices, innovative gesture systems, augmented reality frameworks, real-time real-space directional Wi-Fi, and an augmented reality social networking platform. Additionally, they will offer commercial wireless networking infrastructure, 3D mapping with active electronically scanned array radar systems, advanced defense data intelligence collection, physical and cyber security products, and an aero-ballistic positioning system. This technology suite is just the first in a series of groundbreaking innovations that we have planned.
Use Cases
B2B
Use Cases
Defense & Aerospace
Use Cases
B2C
Asset tracking of consumer items, like the Apple AirTag, as well as a system that allows consumers to tell USPS, UPS, or FedEx where to leave a package.