Edge Vision and AI
Research Objectives
Development of efficient deep neural network architectures for real-time visual data analytics (image classification, object detection, and segmentation). This includes specialized deep architectures based on task-guided neural topologies, non-learnable structured features, attention mechanisms, and spatio-temporal data, with emphasis on neural architecture search (NAS) to design adaptable and efficient models for low-resource systems.
Application of vision-based control for drones and autonomous systems using end-to-end learning approaches, and integration of vision algorithms with spatial analytics to enhance detection and tracking in dynamic settings.
Research on edge AI for resource-constrained environments, particularly drones and smart cameras: deploying AI models on embedded systems and ensuring low latency and high efficiency through data-efficient processing, model quantization, and pruning.
Enhancing the robustness of AI for safety-critical systems, such as autonomous vehicles, by developing countermeasures against vulnerabilities in AI systems, including data faults, noise, and cyberattacks, to ensure reliable and trustworthy results.
Development of machine learning benchmarks and datasets, and focus on open-source research.
Seamless integration and deployment of deep learning and visual analytics algorithms in real-world applications:
Disaster Management: Algorithms for detecting disaster-related events using aerial imagery and deep learning.
Transportation and Mobility: AI-driven traffic monitoring and infrastructure condition assessment using UAVs.
Smart Cities: Intelligent systems for improving urban infrastructure and public safety.
Efficient Deep Learning Algorithms (tinyDL)
Computer vision (CV) technologies have made impressive progress in recent years due to the advent of deep learning (DL) and Convolutional Neural Networks (ConvNets), but often at the expense of increasingly complex models that need ever more computation and storage. ConvNets typically place severe demands on local device resources, which conventionally limits their adoption on mobile and embedded platforms. There is therefore a need to develop small ConvNets that provide higher performance, a smaller memory footprint, and faster development times. My research involves identifying and exploiting the trade-offs between computation, image resolution, and parameter count in order to develop the most efficient neural network models. Specifically, it aims at the development of lightweight and general-purpose convolutional neural networks for various vision tasks. One particular research direction is to explore efficient operators (group point-wise and depth-wise dilated separable convolutions) to learn representations with fewer FLOPs and parameters.
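The saving from such operators can be seen from a simple parameter count. The sketch below compares a standard convolution against a depthwise-separable one (depthwise followed by point-wise); the layer shapes are illustrative assumptions, not taken from any specific model.

```python
# Hypothetical sketch: parameter counts for a standard convolution vs. a
# depthwise-separable one, illustrating why such operators shrink models.

def standard_conv_params(c_in, c_out, k):
    # a k x k kernel spans all input channels for every output channel
    return k * k * c_in * c_out

def depthwise_separable_params(c_in, c_out, k):
    # depthwise: one k x k filter per input channel
    # point-wise: a 1 x 1 convolution that mixes channels
    return k * k * c_in + c_in * c_out

c_in, c_out, k = 128, 128, 3  # assumed layer shape
std = standard_conv_params(c_in, c_out, k)        # 147456 parameters
sep = depthwise_separable_params(c_in, c_out, k)  # 17536 parameters
print(std, sep, round(std / sep, 1))              # roughly 8x fewer
```

For 3x3 kernels the reduction approaches a factor of nine as the channel count grows, which is why these operators dominate lightweight architectures.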
Selected Publications:
Christos Kyrkou, "Toward Efficient Convolutional Neural Networks with Structured Ternary Patterns", IEEE Transactions on Neural Networks and Learning Systems, 2024.
publisher / arxiv / zenodo / github / dataset
Christos Kyrkou, George Plastiras, Stylianos Venieris, Theocharis Theocharides, Christos-Savvas Bouganis, "DroNet: Efficient convolutional neural network detector for real-time UAV applications," 2018 Design, Automation & Test in Europe Conference & Exhibition (DATE), Dresden, Germany, pp. 967-972, March 2018.
publisher / zenodo / arxiv / github1 / github2
Christos Kyrkou, "YOLOPeds: Efficient Single-Shot Pedestrian Detection for Smart Camera Applications", IET Computer Vision, 2020, 14, (7), p. 417-425, DOI: 10.1049/iet-cvi.2019.0897
publisher / arxiv / zenodo / github / youtube
Visual Data Analysis and Detection
Object detection is one of the most fundamental computer vision tasks, where the goal is to localize objects within an image. This task is especially challenging when the objects are small and a large image needs to be processed. My research efforts in this area have been towards the development of efficient algorithms for image search to direct the detection process. Applications in this area include unmanned aerial vehicles for traffic monitoring and analytics.
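The idea of directing detection via image search can be illustrated with a toy tiling scheme: a cheap proxy score ranks regions of a large frame so an expensive detector only visits the most promising tiles first. This is a simplified sketch, not the published method; the frame, the "vehicle-like" patch, and the gradient-energy score are all synthetic assumptions.

```python
import numpy as np

# Sketch: rank fixed-size tiles of a large aerial frame by gradient energy,
# a cheap stand-in for an objectness/saliency score, to prioritize where a
# costly object detector should run first.

def tile_scores(image, tile=128):
    gy, gx = np.gradient(image.astype(float))
    energy = gx ** 2 + gy ** 2
    h, w = image.shape
    return {(y, x): energy[y:y + tile, x:x + tile].mean()
            for y in range(0, h, tile) for x in range(0, w, tile)}

frame = np.zeros((512, 512))
frame[100:150, 300:360] = 255.0  # synthetic high-contrast object
ranked = sorted(tile_scores(frame).items(), key=lambda kv: -kv[1])
top = [pos for pos, _ in ranked[:2]]  # tiles to send to the detector first
print(sorted(top))  # the two tiles that contain the object
```

In a real pipeline the proxy score would come from motion, a shallow network, or prior detections, but the search-then-detect structure is the same.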
Selected Publications:
Kristina Telegraph, Christos Kyrkou, "Spatiotemporal Object Detection for Improved Aerial Vehicle Detection in Traffic Monitoring", in IEEE Transactions on Artificial Intelligence, 2024. doi: 10.1109/TAI.2024.3454566
publisher / zenodo / arxiv / dataset / github
Rafael Makrigiorgis, Nicolas Hadjittoouli, Christos Kyrkou, Theocharis Theocharides, "AirCamRTM: Enhancing Vehicle Detection for Efficient Aerial Camera-based Road Traffic Monitoring", Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision (WACV), pp. 2119-2128, 2022.
publisher / zenodo / cvf / youtube1 / youtube2 / dataset
Alexandros Kourris, Christos Kyrkou, Christos-Savvas Bouganis, "Informed Region Selection for Efficient UAV-based Object Detectors: Altitude-aware Vehicle Detection with CyCAR Dataset", 2019 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Macau, China, 2019, pp. 51-58. 3rd place in the 4th IEEE UK&I Robotics and Automation Society (RAS) Conference's Poster Competition.
publisher / zenodo / dataset / youtube
End-to-end learning approaches: Perception-based Control
Smart camera systems are used in a wide spectrum of machine vision applications, including video surveillance, autonomous driving, robots and drones, smart factories, and health monitoring. By leveraging recent advances in deep learning through convolutional neural networks, we can not only enable advanced perception through efficient optimized ConvNets, but also enable the direct control of cameras for active vision without having to rely on hand-crafted pipelines that combine separate detection, tracking, and control stages.
Selected Publications:
Charalambos Soteriou, Christos Kyrkou, Panayiotis Kolios, "Closing the Sim-to-Real Gap: Enhancing Autonomous Precision Landing of UAVs with Detection-Informed Deep Reinforcement Learning", Deep Learning Theory and Applications (DeLTA 2024), Dijon, France, vol. 2171, pp. 176-190, July 2024.
publisher / zenodo / dataset1 / dataset2 / video
Christos Kyrkou, "C^3Net: End-to-End deep learning for efficient real-time visual active camera control", Journal of Real-Time Image Processing, vol. 18, no. 4, pp. 1421-1433, August 2021. (Special Issue on Artificial Intelligence and Machine Learning for Real-Time Image Processing) https://doi.org/10.1007/s11554-021-01077-z
publisher / arxiv / zenodo / youtube
Christos Kyrkou, "Imitation-Based Active Camera Control with Deep Convolutional Neural Network", IEEE International Conference on Image Processing Applications and Systems (IPAS), December 2020. Best Paper Award.
publisher / zenodo / arxiv / youtube
Strengthening the resilience of AI models: Robustifying automated vision systems against attacks
State-of-the-art deep learning models used in computer vision are susceptible to adversarial attacks, which seek small perturbations of the input that cause large errors in the estimates produced by the perception modality. The use of AI/ML-based techniques for detecting and possibly mitigating dynamic cyber-attacks on the camera system/data in the context of automated vision systems is a promising area. We have developed a deep learning approach for detecting out-of-distribution data points and restoring them to a given normal state. The approach utilizes a deep convolutional autoencoder that simultaneously learns to produce an undistorted version of an image and to detect the presence of adversarial artifacts in it.
Selected Publications:
Andreas Papachristodoulou, Christos Kyrkou, Theocharis Theocharides, "DriveGuard: Robustification of Automated Driving Systems with Deep Spatio-Temporal Convolutional Autoencoder", IEEE/CVF Winter Conference on Applications of Computer Vision (WACV) Workshops, January 2021, pp. 107-116.
publisher / zenodo / arxiv / cvf / youtube
Visual Understanding for Emergency Monitoring
Early detection of hazards and calamities (e.g., wildfire, collapsed building, flood) is of utmost importance to ensure the safety of citizens and fast response times in case of a disaster. Towards this direction, the problem of visual scene understanding for disaster classification is tackled through deep learning. A novel dataset called AIDER (Aerial Image Database for Emergency Response) is introduced, and a small deep neural network, EmergencyNet, is developed to classify aerial images and recognize disaster events. The small DNN can run on the processing platform of a UAV to improve autonomy and privacy.
Selected Publications:
Demetris Shianios, Panagiotis Kolios, Christos Kyrkou, "DiRecNetV2: A Transformer-Enhanced Network for Aerial Disaster Recognition", SN Computer Science, 5,770, 2024.
publisher / zenodo / arxiv / dataset
Demetris Shianios, Christos Kyrkou, Panagiotis Kolios, "A Benchmark and Investigation of Deep-Learning-Based Techniques for Detecting Natural Disasters in Aerial Images", Computer Analysis of Images and Patterns, CAIP 2023, Lecture Notes in Computer Science, vol 14185, September 2023. https://doi.org/10.1007/978-3-031-44240-7_24
publisher
Christos Kyrkou and Theocharis Theocharides, "EmergencyNet: Efficient Aerial Image Classification for Drone-Based Emergency Monitoring Using Atrous Convolutional Feature Fusion," in IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, vol. 13, pp. 1687-1699, 2020, doi: 10.1109/JSTARS.2020.2969809.
publisher / zenodo / youtube / dataset / CVonline / github / code ocean / demo
Christos Kyrkou, Theocharis Theocharides, "Deep-Learning-Based Aerial Image Classification for Emergency Response Applications using Unmanned Aerial Vehicles", CVPR 3rd International Workshop on Computer Vision for UAVs, Long Beach, CA, 16-20 June 2019, pp. 517-525.
publisher / zenodo / arxiv / cvf / youtube / dataset
Seamless integration and deployment: Intelligent Multi-camera Video Surveillance
Cameras mounted on aerial and ground vehicles are becoming increasingly accessible in terms of cost and availability, leading to new forms of visual sensing. These mobile devices are significantly expanding the scope of video analytics beyond traditional static cameras by providing quicker and more effective means of sensing, such as wide-area monitoring for civil security and crowd analytics for large gatherings and events. Combining stationary cameras with moving cameras enables new capabilities in video analytics, at the intersection of the Internet of Things, Smart Cities, and sensing.
My initial postdoctoral research focused on networked smart cameras with on-board detection capabilities and on developing optimization and collaboration algorithms that improve the overall performance of the system. A generic and flexible probabilistic camera detection model was formulated, capable of capturing the detection behavior of the object detection modules running on smart cameras. On top of that, mixed integer linear programming techniques were formulated to assign a control action to each camera in order to maximize the overall detection probability. A real experimental setup was developed in which the algorithms were validated.
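The core of the assignment problem can be shown with a toy instance: given per-camera detection probabilities under each candidate action, pick one action per camera to maximize the probability that at least one camera detects the target. The paper solves this as a mixed integer linear program; the exhaustive search and the probability table below are only illustrative assumptions.

```python
from itertools import product

# Toy sketch of camera-action assignment. p[c][a] is an assumed detection
# probability for camera c under control action a (made-up values).
p = [
    [0.4, 0.7],  # camera 0: actions 0, 1
    [0.6, 0.3],  # camera 1
    [0.2, 0.5],  # camera 2
]

def joint_detection(actions):
    # probability that at least one camera detects, assuming independence
    miss = 1.0
    for cam, act in enumerate(actions):
        miss *= 1.0 - p[cam][act]
    return 1.0 - miss

best = max(product(range(2), repeat=len(p)), key=joint_detection)
print(best, round(joint_detection(best), 3))  # (1, 0, 1) 0.94
```

Brute force only works for a handful of cameras; the MILP formulation in the paper is what makes the assignment tractable at network scale, and it can also encode coverage and resource constraints that this sketch omits.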
Selected Publications:
Christos Kyrkou, Eftychios Christoforou, Stelios Timotheou, Theocharis Theocharides, Christos Panayiotou, Marios Polycarpou, "Optimizing the Detection Performance of Smart Camera Networks Through a Probabilistic Image-Based Model", IEEE Transactions on Circuits and Systems for Video Technology, vol. 28, no. 5, pp. 1197-1211, May 2018.
publisher / zenodo / youtube
Doctoral Research - Hardware Accelerated Embedded Vision
My PhD thesis focused on the development of hardware accelerators on Field-Programmable Gate Arrays (FPGAs) for machine learning algorithms used in computer vision, such as Haar cascades and monolithic/cascade Support Vector Machines. Beyond hardware acceleration, my doctoral thesis focused on utilizing multiple cues, such as edge and depth information, to further accelerate the overall object detection process through data reduction. Specific applications handled by the accelerator include face detection, pedestrian detection, and vehicle detection. Today, modern processors utilize such architectures and core ideas to provide Real-Time on-device Perception (RToP).
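The speedup of cascade classifiers comes from early rejection: cheap stages discard most candidate windows so the expensive final stage runs rarely. The sketch below shows that control flow only; the stage functions and thresholds are hypothetical stand-ins for trained SVM stages, not the thesis implementation.

```python
# Illustrative cascade-classifier sketch: each stage either rejects a
# candidate window immediately or passes it to the next (costlier) stage.

def make_stage(threshold):
    # stand-in for a trained SVM stage's decision function
    return lambda window_score: window_score >= threshold

# later stages are assumed to be stricter (and more expensive to evaluate)
cascade = [make_stage(0.2), make_stage(0.5), make_stage(0.8)]

def classify(window_score):
    evaluated = 0
    for stage in cascade:
        evaluated += 1
        if not stage(window_score):
            return False, evaluated  # early rejection saves computation
    return True, evaluated           # survived every stage: a detection

print(classify(0.1))  # (False, 1): rejected by the first, cheapest stage
print(classify(0.9))  # (True, 3): passed all stages
```

In hardware this structure maps naturally onto a pipeline where most windows occupy only the first stage, which is what makes real-time detection feasible on an FPGA.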
Selected Publications:
Christos Kyrkou, Christos-Savvas Bouganis, Theocharis Theocharides, Marios Polycarpou, "Embedded Hardware-Efficient Real-Time Classification with Cascade Support Vector Machines", IEEE Transactions on Neural Networks and Learning Systems, vol. 27, no. 1, pp. 99-112, January 2016.
publisher / zenodo / youtube