8-10 June 2020
Indico / zoom
Europe/Berlin timezone

Big Data Virtualization: why and how?

10 Jun 2020, 09:00

https://zoom.us/j/98141351045?pwd=SHlYK1VOSk1WdTBwbmhoamhJZndQUT09
Password: DLC-2020
Meeting ID: 981 4135 1045


Prof. Alexander Bogdanov (St.Petersburg State University)


The fact that over 2000 programs exist for working with various types of data, including Big Data, makes flexible storage a quintessential issue. Storage comes in many forms: portals, archives, data showcases, databases of different varieties, data clouds and networks, with either synchronous or asynchronous connections to the computing resources. Because the type of data is frequently unknown a priori, a highly flexible storage system is needed, one that allows users to switch easily between various sources and systems.
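The switching between heterogeneous storage back ends described above can be sketched as a uniform adapter interface; the class and method names below are illustrative assumptions, not part of any system named in the talk:

```python
from abc import ABC, abstractmethod

class StorageAdapter(ABC):
    """Uniform interface over heterogeneous storage back ends
    (databases, archives, data clouds, and so on)."""

    @abstractmethod
    def read(self, key: str) -> bytes: ...

    @abstractmethod
    def write(self, key: str, data: bytes) -> None: ...

class InMemoryStore(StorageAdapter):
    """Stand-in back end for illustration; a real adapter would
    wrap a database connection, an archive, or a cloud API."""

    def __init__(self) -> None:
        self._data: dict[str, bytes] = {}

    def read(self, key: str) -> bytes:
        return self._data[key]

    def write(self, key: str, data: bytes) -> None:
        self._data[key] = data

def process(store: StorageAdapter) -> bytes:
    # Application code depends only on the interface, so back ends
    # can be swapped without touching the processing logic.
    return store.read("dataset")
```

Because `process` sees only the abstract interface, replacing `InMemoryStore` with an adapter for any other source requires no change to the processing code.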
A significant part of these problems can be solved with the paradigm of the Virtual Personal Supercomputer, which was developed for computing but has also been used to build a framework for distributed ledgers. The idea of this approach is to virtualize not only the processing itself but also the entire field on which the processing is performed, namely the network, the file system and the shared memory. This makes it possible to create a single image of the operating environment, which simplifies the user's work and increases processing speed.
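The "single image of the operating environment" idea can be illustrated as a facade that mounts virtualized resources (network nodes, file systems, shared-memory segments) under one unified namespace; this is a minimal sketch under assumed names, not the actual Virtual Personal Supercomputer implementation:

```python
class VirtualEnvironment:
    """Hypothetical single-image facade: one path namespace over
    virtualized network, file-system and shared-memory resources."""

    def __init__(self) -> None:
        self._resources: dict[str, object] = {}

    def mount(self, path: str, resource: object) -> None:
        # Each resource appears under a single unified path space,
        # regardless of whether it is local or remote.
        self._resources[path] = resource

    def resolve(self, path: str) -> object:
        # The user addresses every resource the same way; the facade
        # hides where and how it is actually provided.
        return self._resources[path]
```

A user would then address `/net/node1` or `/fs/archive` identically, which is what simplifies the user's view of the environment.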

Primary authors

Prof. Alexander Bogdanov (St.Petersburg State University)
Prof. Alexander Degtyarev (St.Petersburg State University)
Prof. Nadezhda Shchegoleva (St.Petersburg State University)
Dr. Vladimir Korkhov (St.Petersburg State University)
Mr. Valery Khvatov (DGT Technologies AG)

Presentation Materials
