Virtual Data Explorer is a set of software components that allows you to visualize and explore your computer network topology as a set of 3D data-shapes, using Virtual and/or Mixed Reality headsets.
Our brain is really good at perceiving objects in Real Reality, say, the shape of your hand. But it's not that good at grasping the precise three-dimensional shape of that same hand shown on a computer screen. Hence it's quite tricky to make 3D data visualizations part of our everyday data-analysis workflow if we have to ingest them from a flat screen.
Behold: the umpteenth-generation XR headsets! These fancy things are (finally) able to immerse us in stereoscopically perceivable data visualizations. This allows us to create (non-geospatial!) network topology visualizations that map to our (or your NOC/SOC operator's) understanding of the sets of networked entities (say, computers, toasters, drones, nukes, roombas, etc.) that participate in the to-be-protected networks.
No pixie dust. No rainbows. No unicorn skeletons either.
VDE has three components:
- a Unity 3D application that runs in / for the headsets,
- a C# backend,
- a few lines of JavaScript for the browser plugin.
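To give a rough idea of how these pieces could fit together, here is a minimal, hypothetical C# sketch of a topology snapshot that a backend might serialize and hand to the headset client. The type names, fields, and the JSON-over-WebSocket assumption are illustrative only and are not VDE's actual wire format.

```csharp
// Hypothetical sketch only: type names, fields, and the JSON-over-WebSocket idea
// are assumptions for illustration, not VDE's actual wire format.
using System;
using System.Collections.Generic;
using System.Text.Json;

public record Node(string Id, string Group, string Kind);            // e.g. a host, toaster, drone, ...
public record Edge(string SourceId, string TargetId, int Sessions);  // traffic observed between two nodes

public record TopologySnapshot(
    DateTime WindowStart,
    DateTime WindowEnd,
    List<Node> Nodes,
    List<Edge> Edges);

public static class Demo
{
    public static void Main()
    {
        var snapshot = new TopologySnapshot(
            DateTime.UtcNow.AddMinutes(-5),
            DateTime.UtcNow,
            new List<Node>
            {
                new("10.0.0.1", "dmz", "server"),
                new("10.0.0.7", "office", "workstation")
            },
            new List<Edge> { new("10.0.0.7", "10.0.0.1", 42) });

        // A backend could push this JSON to the headset client (e.g. over a WebSocket);
        // here we just print it to show the shape of the data.
        Console.WriteLine(JsonSerializer.Serialize(
            snapshot, new JsonSerializerOptions { WriteIndented = true }));
    }
}
```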
For a data visualization to be useful and efficient, it needs to align with our internalized understanding of the data we need to understand, explore, and monitor, i.e. extract information from.
In the Mental Model Mapping Method for Cybersecurity paper we describe a method for interviewing Subject Matter Experts to extract their implicit and explicit understanding of the data they work with, and to turn that understanding into useful, interactive, stereoscopically perceivable visualizations.
A 3D visualization may look fancy and sci-fi, but above all it must be useful. Hence that process should be followed when creating data layouts for VDE.
Please do read the papers below, which discuss the reasoning behind the software, the topology layouts, and the method used to create these and other layouts.
- Enhancing Cyber Defense Situational Awareness using 3D Visualizations (13th International Conference on Cyber Warfare and Security)
- Operator Impressions of 3D Visualizations for Cybersecurity Analysts (18th European Conference on Cyber Warfare and Security)
- VR/MR Supporting the Future of Defensive Cyber Operations (14th IFAC/IFIP/IFORS/IEA Symposium on Analysis, Design, and Evaluation of Human-Machine Systems)
- VR/MR Supporting the Future of Defensive Cyber Operations (NATO CA2X2 Forum: Computer Aided Analysis, Exercise, Experimentation)
- Mental Model Mapping Method for Cybersecurity (22nd International Conference on Human-Computer Interaction)
- Interactive Stereoscopically Perceivable Multidimensional Data Visualizations for Cybersecurity (JDST Vol. 4: Big Data Challenges – Situation Awareness and Decision Support)
- Exploratory Visual Analytics (NATO STO Technical Report STO-TR-IST-141)
- User Interactions in Virtual Data Explorer (International Conference on Human-Computer Interaction, HCII 2022)
- A 3D mixed reality visualization of network topology and activity results in better dyadic cyber team communication and cyber situational awareness (Frontiers in Big Data, section Cybersecurity and Privacy)
- Interactive Stereoscopically Perceivable Multidimensional Data Visualizations for Cybersecurity (PhD thesis, TalTech)
Keep in mind that understanding 3D structures shown on a flat screen is much harder for the brain than observing them in Mixed, Virtual, or Real Reality. But until there is a WebXR demo, you'll have to make do with pics and vids.
You'll see the logical topology of networked entities that were active during the NATO CCDCOE Locked Shields exercise. The topology is overlaid with network traffic, with edges representing the number of sessions observed during a set time window. The data was ingested from Moloch.
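As a rough illustration of how such an overlay could be computed, here is a minimal C# sketch that counts sessions per source/destination pair within one time window. The Session record and its field names are assumptions for the example, not Moloch's actual export schema.

```csharp
// Hypothetical sketch only: the Session record and its field names are
// assumptions for illustration, not Moloch's actual export schema.
using System;
using System.Collections.Generic;
using System.Linq;

public record Session(string SrcIp, string DstIp, DateTime StartTime);

public static class EdgeAggregator
{
    // Count sessions per (source, destination) pair within [windowStart, windowEnd).
    public static Dictionary<(string SrcIp, string DstIp), int> CountSessionsPerEdge(
        IEnumerable<Session> sessions, DateTime windowStart, DateTime windowEnd)
    {
        return sessions
            .Where(s => s.StartTime >= windowStart && s.StartTime < windowEnd)
            .GroupBy(s => (s.SrcIp, s.DstIp))
            .ToDictionary(g => g.Key, g => g.Count());
    }
}
```

Each resulting count could then drive the visual weight of the corresponding edge in the 3D layout.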
VDE v1 is integrated into the VRDAE, which is developed by the United States Army Command, Control, Communications, Computers, Cyber, Intelligence, Surveillance and Reconnaissance Center. Please read more about related projects: here and here.
M4C was presented at the 22nd International Conference on Human-Computer Interaction. You'll find a bit more information on that topic here.
Presented at the MAVRIC conference. Contains captures from both VDE v1 and v2, as well as a walkthrough of the network configuration file format.
The paper was presented at the 24th International Conference on Human-Computer Interaction.
VDE v1 was released only as a component of VRDAE.
VDE v2 is integrated into MRET, hence you can use it within an MRET project.
Independent builds are available upon request for Vision Pro, HoloLens, Oculus Quest, Magic Leap, and HTC Vive.
If you would like to try out VDE v2 with your datasets, please do reach out and let’s explore the options.