Sándor Laki
Habil. Associate Professor
Contact details
Address
1117 Budapest, Pázmány Péter sétány 1/c.
Room
2.506
Phone/Extension
8477
Research fields
  • 1.2 Computer and information sciences
    • Computer sciences
  • 2. Engineering and technology
    • 2.2 Electrical engineering, Electronic engineering, Information engineering
      • telecommunications
Computing in the network

Computing in the network is a new research area that has emerged over the past few years. In-network computing refers to executing programs, which would typically run on end-hosts, within network devices. In this model the computation is performed inside the network, using devices that have already been deployed and already forward network traffic. In contrast to traditional network computing, where the computation is done by computers deployed in a network, the computations are performed by programmable switches. The two key benefits of in-network computing are high throughput and low latency. Today's switch ASICs process up to ten billion packets per second, supporting billions of operations per second per offloaded application. These switches are designed as pipelines working on streams of packets without stalls, and in non-overloaded situations they generally provide sub-microsecond latency with low variance. In in-network computing, transactions are terminated within their network path rather than reaching an end-host (e.g. a server); the latency introduced by the end-host is thus saved, significantly reducing the response time. Existing applications have recently demonstrated that in many scenarios, performing the computation in the network can achieve 10,000 times higher performance than their server-based counterparts.
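The idea of terminating transactions on the network path can be illustrated with a toy model. The sketch below, with purely illustrative names and latency figures, models a programmable switch that answers key-value GET requests from a small on-switch table and only falls back to the end-host server on a miss; real deployments program switch ASICs in languages such as P4.

```python
# Toy model of in-network computing: a switch terminates cached
# key-value requests on the network path instead of forwarding every
# request to the end-host server. Names and latencies are illustrative.

SERVER_LATENCY_US = 100.0   # assumed round trip to the end-host
SWITCH_LATENCY_US = 1.0     # assumed on-path switch processing time

class Switch:
    def __init__(self, cache):
        self.cache = cache  # small on-switch table (e.g. register array)

    def handle_get(self, key, server):
        # Hit: the transaction is terminated within the network path.
        if key in self.cache:
            return self.cache[key], SWITCH_LATENCY_US
        # Miss: fall back to the traditional server round trip.
        return server[key], SWITCH_LATENCY_US + SERVER_LATENCY_US

server_store = {"a": 1, "b": 2, "c": 3}
sw = Switch(cache={"a": 1})

val, lat = sw.handle_get("a", server_store)    # served in-network
val2, lat2 = sw.handle_get("b", server_store)  # served by the end-host
```

Under these assumed numbers, the in-network hit avoids the entire server round trip, which is where the large response-time reductions reported for in-network applications come from.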

Foundations of Zero-Touch Computer Networks

With the advent of programmable data planes [1,2], a new era has begun in computer networking, transforming networks from packet forwarding infrastructures into programmable end-to-end platforms. The main idea of Software-Defined Networking 1.0 (SDN 1.0) was the separation of the data and control planes in switching devices and the opening of the control plane, supervised by a logically centralized software controller. In addition to the control plane programmability of SDN 1.0, data plane programmability allows network operators to gain full control over their infrastructure. They can not only describe how control plane applications fill the tables of packet forwarding devices, but also define how these devices should handle or process packets. In contrast to SDN 1.0, modifying the low-level packet processing does not require redesigning the underlying switching microchip (ASIC); it can easily be described in software. As Nick McKeown said in his keynote at NetDev 0x14: "We will no longer think in terms of protocols. Instead, we will think in terms of software. All functions and 'protocols' will have migrated up and out of hardware into software. Throughout the Internet." In future programmable networks, the control plane can continuously monitor network state and react to it in many ways. This extended visibility will lay the foundations of self-driving and intent-based networks. Deeply programmable networks [3] will reshape computer networks and pave the way toward unforeseen innovations in the field. However, they will also raise many challenges relating to security, reliability, and trust in networking software.
Our research focuses on these challenges, aiming to answer research questions from the following fields:
  • New network algorithms for intent-based traffic engineering solely in the data plane
  • Programming the network as a big switch; disaggregation of data and control plane programs
  • Utilization-aware routing
  • In-network acceleration of various applications
  • Detection and mitigation of anomalous and malicious traffic behavior in the data plane
  • Detection and diagnosis of performance problems by measurements in the data plane
  • Fast recovery after failure detection

[1] Pat Bosshart, Dan Daly, Glen Gibb, Martin Izzard, Nick McKeown, Jennifer Rexford, Cole Schlesinger, Dan Talayco, Amin Vahdat, George Varghese, David Walker: P4: Programming Protocol-Independent Packet Processors. ACM SIGCOMM Computer Communication Review (CCR), Volume 44, Issue 3 (July 2014)
[2] P4 Consortium: http://p4.org

Resource Allocation and Quality of Service in Telecommunication Networks

Despite extensive research and standardization in the area of Quality of Service (QoS), most of the developed solutions have never been deployed in practice. Proponents of overprovisioning argue that it is much easier and more efficient to add capacity when needed than to build and maintain complex QoS mechanisms that provide only minor improvement during congestion. Network congestion, and the ways of shielding end users from its impact, is nevertheless a recurring concern, especially now that the network is becoming a critical societal asset. The COVID-19 pandemic made this evident, revealing the need for proper mechanisms that improve network robustness and ensure a sufficient quality of experience for end users.

We work on a core-stateless resource sharing framework called Per Packet Value. More details are available at http://ppv.elte.hu
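The core-stateless idea behind Per Packet Value can be sketched in a few lines. In the simplified model below (all details of value encoding and marking policy are abstracted away, and the function name is ours), edge nodes have already marked each packet with a value expressing its importance, and a congested core node keeps no per-flow state: it simply forwards the highest-valued packets that fit its capacity and drops the rest.

```python
# Minimal sketch of the Per Packet Value (PPV) resource sharing idea,
# under simplifying assumptions: packets arrive already marked with a
# value, and a congested core node drops the lowest-valued packets
# without consulting any per-flow state.

def core_node_forward(packets, capacity):
    """Forward the `capacity` highest-value packets; drop the rest.

    `packets` is a list of (packet_id, value) pairs. Only the value
    carried by each packet is used -- the node is flow-stateless.
    """
    ranked = sorted(packets, key=lambda p: p[1], reverse=True)
    kept_ids = {pid for pid, _ in ranked[:capacity]}
    # Preserve arrival order among the forwarded packets.
    return [p for p in packets if p[0] in kept_ids]

# Five packets arrive at a node that can forward only three of them.
arrivals = [(1, 40), (2, 90), (3, 10), (4, 70), (5, 55)]
out = core_node_forward(arrivals, capacity=3)
# Packets 2, 4 and 5 carry the highest values and are forwarded.
```

Because the drop decision depends only on the values carried in the packets, the resource sharing policy lives entirely in the edge marking, which is what keeps the core stateless.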