Journal of Real-Time Image Processing Special Issue on “Real-Time Image and Video Processing in Mobile Embedded Systems”
Recent advancements in mobile devices, such as smartphones, and smart wearable gadgets, such as smart watches, have changed the way we connect with the world around us. Users of smart devices wish to maintain always-on access to information and to personal and social networks. With the rapid advancement of embedded mobile sensors, such as cameras, GPS, digital compasses, and proximity sensors, a variety of data is recorded, enabling new sensing applications across diverse research domains, including mobile information retrieval, mobile media analysis, mobile computer vision, mobile social networks, mobile human-computer interaction, mobile gaming, mobile entertainment, mobile healthcare, and mobile learning. Irrespective of the application field, the majority of the challenges and issues raised by emerging mobile image and video processing still lie ahead, and many research questions remain to be answered. For instance, a seamless user experience has been recognized as one of the major factors in designing mobile image and video processing applications for multiple form factors. However, providing it is a challenging task that requires effective integration of mobile sensors and multidisciplinary research, such as visual content adaptation and user behavior analysis. Fetching high-quality video and images on mobile devices increases battery consumption and processing load.
Therefore, most battery-intensive applications are offloaded to cloud servers to save energy and extend battery lifetimes for mobile users. Given the immense popularity of mobile phones and of data-intensive applications such as video streaming, considerable improvement in video and image processing in mobile embedded systems has been seen over the last decade. For instance, MPEG has designed a standard for mobile visual search (MPEG Compact Descriptors for Visual Search). Also, the mobile application Instagram has attracted 200 million monthly active users to date. Wearable devices such as the Apple Watch and Google Glass are also personal digital assistants that have shown their potential to become the next generation of mobile media and embedded services. Given the role of mobile embedded devices in high-quality streaming and the pace of technology development, this special issue is designed to provide an open study and investigation of the current developments, future trends, and directions of real-time image and video processing in mobile embedded systems. In general, the audience of JRTIP has been researchers from both academia and industry working in the multidisciplinary field of image and video processing.
JRTIP bridges the gap between the theory and practice of image processing. It covers all relevant aspects of real-time image processing systems and algorithms for industrial, medical, consumer electronics, portable, and embedded device applications. It presents practical, low-cost, real-time architectures for image processing systems, as well as tools, simulation, and modeling for real-time image processing algorithms and their implementations. This special issue on ‘Real-Time Image and Video Processing in Mobile Embedded Systems’ is intended to present the current state of the art in mobile embedded system applications using real-time image and video processing. Original and unpublished papers are solicited that present research results, projects, surveys, and industrial experiences dealing with theory and applications within the theme of Real-Time Image and Video Processing in Mobile Embedded Systems. Authors are encouraged to submit contributions in any of the following or related areas of real-time processing and parallel computing:
- Real-time image and video processing applications taking advantage of mobile embedded devices
- Next-generation real-time mobile image and video coding
- Real-time mobile visual search
- Real-time single-image super-resolution in mobile embedded systems
- Real-time processing of large-scale image and video data in mobile systems
- Real-time image scanning, display, and printing in mobile systems
- Real-time 3D scene reconstruction and occlusion handling in mobile systems
- Real-time probabilistic statistical models for local semantic representation in mobile systems
- Real-time context model for global semantic representation in mobile systems
- Real-time algorithms for large-scale social behavior analysis in mobile systems
- Real-time 2D/3D computer vision for mobile embedded devices
- Real-time visual content analysis, representation, and understanding with mobile and wearable devices
- Real-time computational photography on mobile embedded devices
- Real-time de-mosaicking and denoising methods in mobile systems
- Other topics related to real-time mobile image and video processing
Guest Editors:
- Ernesto Damiani, Khalifa University, UAE, firstname.lastname@example.org
- Marco Anisetti, Università degli Studi di Milano, Italy, email@example.com
- Awais Ahmad, Kyungpook National University, Korea, firstname.lastname@example.org
- Gwanggil Jeon, Incheon National University, Korea, email@example.com
Authors from academia and industry working in the above research areas are invited to submit original manuscripts that have not been published and are not currently under review by other journals or conferences. All potential authors are requested to volunteer as reviewers in the peer-review process for manuscripts submitted to this special issue. All submitted manuscripts that fall within the scope of JRTIP and the focus of this special issue will be peer-reviewed based on their originality, presentation, and novelty, as well as their suitability for the special issue.
Previously published conference papers should be clearly identified by the authors, and an explanation should be provided of how such papers have been extended to be considered for this special issue. Authors are requested to follow the author guidelines at http://www.springer.com/11554 and to include the name of the call in the submission letter.
Prior to submitting a full paper, authors are highly encouraged to verify the appropriateness of their submission by sending a 100-200 word abstract to the guest editor: Gwanggil Jeon, Incheon National University, Korea, firstname.lastname@example.org