Large Scale Perceptual Summaries of Visual Information

GOAL

With the growing popularity of cameras, both stationary and in mobile devices, numerous opportunities are opening up for applications based on visual analysis. A research team led by Prof. Ayellet Tal (Technion) investigates large scale perceptual summaries of visual information. The project includes two research sub-projects:
Summarization of surveillance video
Surveillance video accounts for a large share of video storage and transmission, yet less than 1% of it is ever watched by humans. Nevertheless, surveillance video is still compressed and transmitted using standard methods designed for human viewing. Once we accept that pleasant human viewing is not required, we can re-examine the components of the video lifecycle and obtain a temporally compact summary that preserves the features important for machine understanding of surveillance video.
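To make the idea of a temporally compact summary concrete, the following is a minimal sketch (not the project's actual method): frames are kept only when they differ substantially from the last kept frame, so long static stretches collapse to a single frame. The frame-difference measure and the threshold value here are illustrative assumptions.

```python
import numpy as np

def summarize_frames(frames, threshold=5.0):
    """Keep only frames that differ substantially from the last kept frame.

    `frames` is an array of grayscale frames. A frame is retained when its
    mean absolute pixel difference from the previously kept frame exceeds
    `threshold`. Both the measure and the threshold are toy choices.
    """
    kept = [0]  # always keep the first frame as the reference
    for i in range(1, len(frames)):
        diff = np.mean(np.abs(frames[i].astype(float) -
                              frames[kept[-1]].astype(float)))
        if diff > threshold:
            kept.append(i)
    return kept

# Synthetic example: 100 static frames with a brief "event" in the middle.
frames = np.zeros((100, 48, 64), dtype=np.uint8)
frames[40:50, 10:30, 20:40] = 200  # a bright object appears for 10 frames
selected = summarize_frames(frames, threshold=5.0)
# The 100-frame clip collapses to 3 frames: start, event onset, event end.
```

A real system would of course use robust background modeling rather than raw frame differencing, but the compression principle is the same: static time is cheap, change is what must be preserved.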
Novelty detection in video and image databases
Novelty detection is an emerging area of research in computer vision. It can help reduce surveillance data by eliminating routine objects and events and handling only the "unusual" objects that are of interest. Novelty detection requires machine learning tools suited to learning novel video events, where normal video events are learned in an unsupervised manner from training video streams. The project aims to develop new novelty detection methods that exploit the visual saliency of images and scale to large collections of images or video.
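The unsupervised scheme described above can be sketched in a few lines: fit a simple statistical model of "normal" feature vectors from unlabeled training data, then score new samples by how far they fall from that model. The per-dimension Gaussian model and z-score threshold below are illustrative assumptions, not the project's actual algorithm.

```python
import numpy as np

def fit_normal_model(train_features):
    """Estimate per-dimension mean and std of 'normal' feature vectors.

    Training is unsupervised: we only assume the training stream consists
    of routine events, with no labels.
    """
    mu = train_features.mean(axis=0)
    sigma = train_features.std(axis=0) + 1e-8  # guard against zero variance
    return mu, sigma

def novelty_score(x, mu, sigma):
    """Max absolute z-score across feature dimensions; higher = more novel."""
    return float(np.max(np.abs((x - mu) / sigma)))

# Synthetic demo: routine events cluster near the training distribution.
rng = np.random.default_rng(0)
normal_train = rng.normal(loc=0.0, scale=1.0, size=(500, 8))
mu, sigma = fit_normal_model(normal_train)

routine = rng.normal(0.0, 1.0, size=8)   # looks like the training data
unusual = np.full(8, 6.0)                # far from anything seen in training
```

A deployed detector would replace the raw feature vectors with saliency-aware image descriptors, but the decision rule is the same: routine samples score low, novel samples score high.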
Figure 3: 24 Hours in 20 seconds
STATUS

TBD
PEOPLE
Prof. Ayellet Tal, Technion EE
Prof. Dani Lischinski, HUJI CSE
Prof. Shmuel Peleg, HUJI CSE
Prof. Daphna Weinshall, HUJI CSE
Prof. Michael Werman, HUJI CSE
Prof. Lihi Zelnik-Manor, Technion EE
Students
Dr. Chetan Arora (HUJI)
Gil Ben-Arzi (HUJI)
Elhanan Elboher (HUJI)
Yedid Hoshen (HUJI)
George Leifman (Technion)
Ran Margolin (Technion)
Yair Poleg (HUJI)
Yehezkel Reshef (HUJI)
Dmitry Rudoy (Technion)
Alumni students
Dmitri Hanukaev (HUJI)
Gal Levi (HUJI)
Elizabeth Shtrom (Technion)
PUBLICATIONS
  1. N. Efrat, D. Glasner, A. Apartsin, B. Nadler, and A. Levin, “Accurate Blur Models vs. Image Priors in Single Image Super-Resolution”, International Conference on Computer Vision (ICCV), 2013.
  2. A. Levin, B. Nadler, F. Durand, and W. Freeman, “Patch Complexity, Finite Pixel Correlations and Optimal Denoising”, European Conference on Computer Vision (ECCV), 2012.

Ayellet Tal ➭

  1. R. Margolin, L. Zelnik-Manor, and A. Tal, “Saliency For Image Manipulation”, The Visual Computer, June 2012.
  2. R. Margolin, A. Tal, and L. Zelnik-Manor, “What Makes a Patch Distinct?”, CVPR 2013.
  3. E. Shtrom, G. Leifman, and A. Tal, “Saliency Detection in Large Point Sets”, ICCV 2013.
  4. R. Margolin, L. Zelnik-Manor, and A. Tal, “How to Evaluate Foreground Maps?”, CVPR 2014 (accepted).

Dani Lischinski ➭

  1. Y. Inger, Z. Farbman, and D. Lischinski, “Locally Adaptive Products for All-Frequency Relighting”, Eurographics 2013.
  2. Y. HaCohen, E. Shechtman, and D. Lischinski, “Deblurring by Example using Dense Correspondence”, Proc. ICCV 2013, Dec. 2013.
  3. V. Prinet, D. Lischinski, and M. Werman, “Illuminant Chromaticity from Image Sequences”, Proc. ICCV 2013, Dec. 2013.

Shmuel Peleg ➭

  1. Y. Hoshen, C. Arora, Y. Poleg, and S. Peleg, “Efficient Representation of Distributions for Background Subtraction”, IEEE Int. Conf. on Advanced Video and Signal Based Surveillance (AVSS’13)
  2. Y. Poleg, C. Arora, and S. Peleg, “Temporal Segmentation of Egocentric Videos”, To appear in CVPR’14

Daphna Weinshall ➭

  1. Daphna Weinshall, Dmitri Hanukaev and Gal Levi, “LDA Topic Model with Soft Assignment of Descriptors to Words”,  In Proceedings of 30th International Conference on Machine Learning (ICML), Atlanta GA, June 2013.
  2. U. Shalit, D. Weinshall, and G. Chechik. “Modeling Musical Influence with Topic Models”, In International Conference on Machine Learning (ICML), 2013
  3. A. Golbert and D. Weinshall, “Object Detection in Multi-view 3D Reconstruction Using Semantic and Geometric Context”, ISPRS Annals, Volume II-3/W3, CMRT13 – City Models, Roads and Traffic, pp. 97-102, 2013.

Michael Werman ➭

  1. E. Elboher and M. Werman, “Efficient and Accurate Gaussian Image Filtering Using Running Sums”, SoCPaR 2012.
  2. E. Elboher, M. Werman, and Y. Hel-Or. “The Generalized Laplacian Distance and its Applications for Visual Matching.” CVPR 2013.
  3. V. Prinet, D. Lischinski, and M. Werman, “Illuminant Chromaticity from Image Sequences”, Proc. ICCV 2013, Dec. 2013.

Lihi Zelnik-Manor ➭

  1. R. Margolin, L. Zelnik-Manor, and A. Tal, “Saliency For Image Manipulation”, The Visual Computer, June 2012.
  2. R. Margolin, A. Tal, and L. Zelnik-Manor, “What Makes a Patch Distinct?”, CVPR 2013.
  3. D. Rudoy, D.B. Goldman, E. Shechtman, and L. Zelnik-Manor, “Learning Video Saliency from Human Gaze Using Candidate Selection”, CVPR 2013.
  4. R. Margolin, L. Zelnik-Manor, and A. Tal, “How to Evaluate Foreground Maps?”, CVPR 2014 (accepted).
  5. A. Gilinsky and L. Zelnik-Manor, “SIFTpack: a Compact Representation for Efficient SIFT Matching”, ICCV 2013.