Edge Vision and AI
Research Objectives
Develop innovative and efficient deep learning algorithms that are adaptive and well-suited to big data analytics tasks such as visual object detection and recognition, with a focus on their usability for on-device AI.
Enhance the capability of machines to extract meaningful knowledge from large-scale datasets using end-to-end learning approaches, eliminating the need for extensive manual annotations.
Investigate data-centric methodologies that leverage advanced computing technologies to improve the accuracy and efficiency of knowledge extraction in the context of big data analytics.
Strengthen the resilience of AI models against various challenges, including data faults, noise, and cyberattacks, to ensure reliable and trustworthy results.
Enable seamless integration and deployment of machine learning and visual analytics algorithms in real-world big data analytics applications, such as predictive modeling, anomaly detection, and decision support systems.
Efficient Deep Learning Algorithms (tinyDL)
Computer vision (CV) technologies have made impressive progress in recent years due to the advent of deep learning (DL) and Convolutional Neural Networks (ConvNets), but often at the expense of increasingly complex models that demand ever more computation and storage. ConvNets typically place severe demands on local device resources, which has traditionally limited their adoption on mobile and embedded platforms. There is therefore a need to develop small ConvNets that provide higher performance, a smaller memory footprint, and faster development times. My research involves identifying and exploiting the trade-offs between computation, image resolution, and parameter count in order to develop the most efficient neural network models. Specifically, it aims at the development of lightweight, general-purpose convolutional neural networks for various vision tasks. One particular research direction is to explore efficient operators (group point-wise and depth-wise dilated separable convolutions) to learn representations with fewer FLOPs and parameters.
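To make the operator idea concrete, below is a minimal sketch, assuming PyTorch, of a depth-wise dilated separable block with a grouped point-wise projection; the layer sizes and the DepthwiseDilatedSeparable name are illustrative and not taken from the published models.

```python
# Minimal PyTorch sketch of a depth-wise dilated separable convolution block.
# Layer names and hyper-parameters are illustrative, not taken from the published models.
import torch
import torch.nn as nn

class DepthwiseDilatedSeparable(nn.Module):
    def __init__(self, in_ch, out_ch, dilation=2, groups=4):
        super().__init__()
        # Depth-wise convolution: one filter per channel, dilated to enlarge the receptive field.
        self.depthwise = nn.Conv2d(in_ch, in_ch, kernel_size=3, padding=dilation,
                                   dilation=dilation, groups=in_ch, bias=False)
        # Group point-wise convolution: mixes channels with fewer parameters than a full 1x1.
        self.pointwise = nn.Conv2d(in_ch, out_ch, kernel_size=1, groups=groups, bias=False)
        self.bn = nn.BatchNorm2d(out_ch)
        self.act = nn.ReLU(inplace=True)

    def forward(self, x):
        return self.act(self.bn(self.pointwise(self.depthwise(x))))

# A standard 3x3 convolution with the same channel counts would use in_ch * out_ch * 9
# weights; this block uses roughly in_ch * 9 + (in_ch * out_ch) / groups.
block = DepthwiseDilatedSeparable(64, 128)
y = block(torch.randn(1, 64, 56, 56))  # -> torch.Size([1, 128, 56, 56])
```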
Selected Publications:
Christos Kyrkou, "Toward Efficient Convolutional Neural Networks with Structured Ternary Patterns", Transactions of Neural Networks and Learning Systems, 2024.
publisher / arxiv / zenodo / github / dataset
Christos Kyrkou, George Plastiras, Stylianos Venieris, Theocharis Theocharides, Christos-Savvas Bouganis, "DroNet: Efficient convolutional neural network detector for real-time UAV applications," 2018 Design, Automation & Test in Europe Conference & Exhibition (DATE), Dresden, Germany, pp. 967-972, March 2018.
publisher / zenodo / arxiv / github1 / github2
Christos Kyrkou, “YOLOPeds: Efficient Single-Shot Pedestrian Detection for Smart Camera Applications”, IET Computer Vision, 2020, 14, (7), pp. 417-425, DOI: 10.1049/iet-cvi.2019.0897
publisher / arxiv / zenodo / github / youtube
Visual Data Analysis and Detection
Object detection is one of the most fundamental computer vision tasks, where the goal is to localize objects within an image. The task is especially crucial in applications where the objects are small and a large image needs to be processed. My research efforts in this area have been towards the development of efficient algorithms for image search that direct the detection process to the most promising regions. Applications in this area include unmanned aerial vehicles for traffic monitoring and analytics.
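As a rough illustration of search-directed detection (not the published algorithms), the sketch below scores image tiles with a cheap proxy and runs the expensive detector only on the top-ranked regions; the tile size, scoring function, and function names are hypothetical.

```python
# Illustrative sketch: run an expensive detector only on image tiles ranked by a cheap
# objectness/activity score, instead of on the full high-resolution frame.
import numpy as np

def select_regions(frame, score_fn, tile=256, top_k=4):
    """Split the frame into tiles, score each cheaply, return the top_k tile boxes."""
    h, w = frame.shape[:2]
    boxes = []
    for y in range(0, h, tile):
        for x in range(0, w, tile):
            patch = frame[y:y + tile, x:x + tile]
            boxes.append((score_fn(patch), (x, y, min(x + tile, w), min(y + tile, h))))
    boxes.sort(key=lambda b: b[0], reverse=True)
    return [box for _, box in boxes[:top_k]]

# Example cheap score: mean edge magnitude as a proxy for "something is there".
def edge_score(patch):
    gy, gx = np.gradient(patch.astype(np.float32).mean(axis=-1))
    return float(np.hypot(gx, gy).mean())

frame = np.random.randint(0, 255, (1024, 2048, 3), dtype=np.uint8)
for x0, y0, x1, y1 in select_regions(frame, edge_score):
    roi = frame[y0:y1, x0:x1]
    # detector(roi) would run here, touching only a fraction of the pixels.
```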
Selected Publications:
Rafael Makrigiorgis, Nicolas Hadjittoouli, Christos Kyrkou, Theocharis Theocharides, "AirCamRTM: Enhancing Vehicle Detection for Efficient Aerial Camera-based Road Traffic Monitoring", Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision (WACV), pp. 2119-2128, 2022.
publisher / zenodo / cvf / youtube1 / youtube2 / dataset
Alexandros Kouris, Christos Kyrkou, Christos-Savvas Bouganis, “Informed Region Selection for Efficient UAV-based Object Detectors: Altitude-aware Vehicle Detection with CyCAR Dataset”, 2019 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Macau, China, 2019, pp. 51-58. 3rd place in the 4th IEEE UK&I Robotics and Automation Society (RAS) Conference’s Poster Competition.
publisher / zenodo / dataset / youtube
End-to-end learning approaches: Perception-based Control
Smart camera systems are used in a wide spectrum of machine vision applications, including video surveillance, autonomous driving, robots and drones, smart factories, and health monitoring. By leveraging recent advances in deep learning through convolutional neural networks, we can not only enable advanced perception through efficient, optimized ConvNets, but also control cameras directly for active vision, without having to rely on hand-crafted pipelines that combine separate detection, tracking, and control stages.
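A minimal sketch of the end-to-end idea follows, assuming PyTorch: a single ConvNet maps the raw frame directly to a camera control command, with no explicit detection or tracking stage. The architecture and the discrete command set are illustrative and do not reproduce the published C^3Net design.

```python
# Illustrative end-to-end perception-to-control network: frame in, control command out.
import torch
import torch.nn as nn

class CameraControlNet(nn.Module):
    def __init__(self, num_commands=5):  # e.g. pan left, pan right, tilt up, tilt down, stay
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(64, num_commands)

    def forward(self, frame):
        # Logits over control actions, learned directly from pixels (e.g. by imitation).
        return self.head(self.features(frame).flatten(1))

net = CameraControlNet()
action = net(torch.randn(1, 3, 224, 224)).argmax(dim=1)  # command sent to the pan-tilt unit
```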
Selected Publications:
Christos Kyrkou, “C^3Net: End-to-End deep learning for efficient real-time visual active camera control”, Journal of Real-Time Image Processing, vol. 18, no. 4, pp. 1421-1433, August 2021. (Special Issue on Artificial Intelligence and Machine Learning for Real-Time Image Processing.) https://doi.org/10.1007/s11554-021-01077-z
publisher / arxiv / zenodo / youtube
Christos Kyrkou, "Imitation-Based Active Camera Control with Deep Convolutional Neural Network", IEEE International Conference on Image Processing Applications and Systems (IPAS), December 2020. Best Paper Award🌟
publisher / zenodo / arxiv / youtube
Strengthen the resilience of AI models: Robustifying Automated Vision Systems Against Attacks
State-of-the-art deep learning models used in computer vision are susceptible to adversarial attacks, which introduce small perturbations of the input that cause large errors in the output of the perception modality. Using AI/ML-based techniques to detect, and possibly mitigate, dynamic cyber-attacks on the camera system and its data in the context of automated vision systems is a promising direction. We have developed a deep learning approach for detecting out-of-distribution data points and restoring them to a given normal state. The approach utilizes a deep convolutional autoencoder that simultaneously learns to produce an undistorted version of an image and to detect the presence of adversarial artifacts in that image.
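The sketch below conveys the core mechanism under simplifying assumptions (PyTorch, reconstruction-error thresholding); the layer sizes, the threshold, and the RestoringAutoencoder name are illustrative rather than the actual DriveGuard architecture.

```python
# Convolutional autoencoder sketch: reconstruct a clean image; a large reconstruction
# error flags likely adversarial / out-of-distribution inputs.
import torch
import torch.nn as nn

class RestoringAutoencoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(inplace=True),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1), nn.Sigmoid(),
        )

    def forward(self, x):
        # Restored (undistorted) estimate of the input frame.
        return self.decoder(self.encoder(x))

model = RestoringAutoencoder()
x = torch.rand(1, 3, 128, 128)              # possibly perturbed camera frame in [0, 1]
restored = model(x)
score = torch.mean((restored - x) ** 2)     # anomaly score: high -> likely attacked frame
is_attacked = score.item() > 0.01           # illustrative threshold, tuned on clean data
```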
Selected Publications:
Andreas Papachristodoulou, Christos Kyrkou, Theocharis Theocharides, "DriveGuard: Robustification of Automated Driving Systems with Deep Spatio-Temporal Convolutional Autoencoder", IEEE/CVF Winter Conference on Applications of Computer Vision (WACV) Workshops, January 2021, pp. 107-116.
publisher / zenodo / arxiv / cvf / youtube
Visual Understanding for Emergency Monitoring
Early detection of hazards and calamities (e.g., wildfire, collapsed building, flood) is of utmost importance to ensure the safety of citizens and a fast response time in case of a disaster. To this end, the problem of visual scene understanding for disaster classification is tackled through deep learning. A novel dataset called AIDER (Aerial Image Database for Emergency Response) is introduced, and a small deep neural network, EmergencyNet, is developed to classify aerial images and recognize disaster events. The small DNN can run on the processing platform of a UAV, improving autonomy and privacy.
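To illustrate the atrous feature fusion idea used for this task, here is a hedged PyTorch sketch of a block with parallel dilated branches fused by summation; the branch widths, dilation rates, and fusion choice are assumptions for illustration, not the exact EmergencyNet block.

```python
# Atrous (dilated) convolutional feature fusion sketch: parallel branches with different
# dilation rates capture context at several scales and are fused cheaply.
import torch
import torch.nn as nn

class AtrousFusionBlock(nn.Module):
    def __init__(self, in_ch, branch_ch=16, rates=(1, 2, 3)):
        super().__init__()
        self.branches = nn.ModuleList([
            nn.Conv2d(in_ch, branch_ch, 3, padding=r, dilation=r, bias=False) for r in rates
        ])
        self.fuse = nn.Sequential(
            nn.BatchNorm2d(branch_ch), nn.ReLU(inplace=True),
            nn.Conv2d(branch_ch, branch_ch, 1),  # 1x1 projection after fusion
        )

    def forward(self, x):
        # Fuse by summation to keep the channel count (and parameter count) small.
        fused = torch.stack([b(x) for b in self.branches], dim=0).sum(dim=0)
        return self.fuse(fused)

block = AtrousFusionBlock(in_ch=32)
y = block(torch.randn(1, 32, 60, 60))  # same spatial size, multi-scale context aggregated
```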
Selected Publications:
Demetris Shianios, Christos Kyrkou, Panagiotis Kolios, "A Benchmark and Investigation of Deep-Learning-Based Techniques for Detecting Natural Disasters in Aerial Images", Computer Analysis of Images and Patterns, CAIP 2023, Lecture Notes in Computer Science, vol 14185, September, 2023. https://doi.org/10.1007/978-3-031-44240-7_24
publisher
Christos Kyrkou and Theocharis Theocharides, "EmergencyNet: Efficient Aerial Image Classification for Drone-Based Emergency Monitoring Using Atrous Convolutional Feature Fusion," in IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, vol. 13, pp. 1687-1699, 2020, doi: 10.1109/JSTARS.2020.2969809.
publisher / zenodo / youtube / dataset / CVonline / github / code ocean / demo
Christos Kyrkou, Theocharis Theocharides, "Deep-Learning-Based Aerial Image Classification for Emergency Response Applications using Unmanned Aerial Vehicles", CVPR 3rd International Workshop on Computer Vision for UAVs, Long Beach, CA, 16-20 June, 2019, pp. 517-525.
publisher / zenodo / arxiv / cvf / youtube / dataset
Seamless integration and deployment: Intelligent Multi-camera Video Surveillance
Cameras mounted on aerial and ground vehicles are becoming increasingly accessible in terms of cost and availability, leading to new forms of visual sensing. These mobile devices are significantly expanding the scope of video analytics beyond traditional static cameras, enabling quicker and more effective capabilities such as wide-area monitoring for civil security and crowd analytics for large gatherings and events. Combining stationary cameras with moving cameras enables new capabilities in video analytics, at the intersection of the Internet of Things, Smart Cities, and sensing.
My initial postdoctoral research focused on networked smart cameras with on-board detection capabilities and on developing optimization and collaboration algorithms that improve the overall performance of the system. A generic and flexible probabilistic camera detection model was formulated, capable of capturing the detection behavior of the object detection modules running on the smart cameras. On top of that, mixed-integer linear programming techniques were formulated to assign a control action to each camera so as to maximize the overall detection probability (a toy version of this assignment problem is sketched below). A real experimental setup was developed on which the algorithms were validated.
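The toy sketch below (plain Python, exhaustive search) illustrates the assignment problem only: each camera picks one control action, each camera/action pair has a detection probability per target, and we choose the action tuple maximizing the expected number of detected targets. The probabilities are made up, and the published work solves this with a mixed-integer linear program rather than enumeration.

```python
# Toy camera-network assignment problem: choose one action per camera to maximize the
# expected number of targets detected by at least one camera.
from itertools import product

# p[camera][action][target] = detection probability for that camera/action pair (illustrative values)
p = [
    [[0.9, 0.1], [0.2, 0.7]],   # camera 0
    [[0.3, 0.6], [0.8, 0.2]],   # camera 1
]
num_targets = 2

def joint_detection(actions):
    """Expected number of targets detected by at least one camera under these actions."""
    total = 0.0
    for j in range(num_targets):
        miss = 1.0
        for i, a in enumerate(actions):
            miss *= 1.0 - p[i][a][j]   # probability that camera i misses target j
        total += 1.0 - miss
    return total

best = max(product(range(2), repeat=len(p)), key=joint_detection)
print(best, joint_detection(best))  # best action tuple and its expected detections
```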
Selected Publications:
Christos Kyrkou, Eftychios Christoforou, Stelios Timotheou, Theocharis Theocharides, Christos Panayiotou, Marios Polycarpou, “Optimizing the Detection Performance of Smart Camera Networks Through a Probabilistic Image-Based Model”, IEEE Transactions on Circuits and Systems for Video Technology, vol. 28, no. 5, pp. 1197-1211, May 2018.
publisher / zenodo / youtube
Doctoral Research - Hardware Accelerated Embedded Vision
My PhD thesis focused on the development of hardware accelerators on Field Programmable Gate Arrays (FPGAs) for machine learning algorithms used in computer vision, such as Haar cascades and monolithic/cascade Support Vector Machines. Beyond hardware acceleration, my doctoral thesis focused on utilizing multiple cues, such as edge and depth information, to further accelerate the overall object detection process through data reduction. Specific applications handled by the accelerator include face detection, pedestrian detection, and vehicle detection. Today, modern processors utilize such architectures and core ideas to provide Real-Time on-device Perception (RToP).
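For intuition about why cascades lend themselves to data reduction and hardware acceleration, the sketch below shows the early-exit evaluation pattern in plain Python; the linear stages, thresholds, and feature vectors are illustrative, not the thesis's actual classifiers or FPGA architecture.

```python
# Cascade classification sketch: cheap early stages reject most candidate windows so that
# only a few reach the later, more expensive stages.
import numpy as np

class CascadeStage:
    def __init__(self, w, b, threshold):
        self.w, self.b, self.threshold = w, b, threshold  # one linear SVM stage

    def passes(self, x):
        return float(np.dot(self.w, x) + self.b) >= self.threshold

def cascade_detect(x, stages):
    """Return True only if the window survives every stage (early exit on first rejection)."""
    for stage in stages:
        if not stage.passes(x):
            return False  # rejected early: later, costlier stages never run
    return True

rng = np.random.default_rng(0)
stages = [CascadeStage(rng.normal(size=64), 0.0, t) for t in (-1.0, 0.0, 0.5)]
windows = rng.normal(size=(1000, 64))          # candidate image windows as feature vectors
detections = [i for i, x in enumerate(windows) if cascade_detect(x, stages)]
```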
Selected Publications:
Christos Kyrkou, Christos-Savvas Bouganis, Theocharis Theocharides, Marios Polycarpou, "Embedded Hardware-Efficient Real-Time Classification with Cascade Support Vector Machines", IEEE Transactions on Neural Networks and Learning Systems, vol. 27, no. 1, pp. 99-112, January 2016.
publisher / zenodo / youtube