How has Face Recognition Advanced since the 1960s?

Sabina Pokhrel / November 5, 2020

The concept of face recognition is not new, nor is its implementation. Its evolution is fascinating: using computers to recognize faces dates back to the 1960s.

Yes, that’s correct, I said the 1960s.

From 1964 to 1966, Woodrow W. Bledsoe, along with Helen Chan and Charles Bisson of Panoramic Research, Palo Alto, California, researched programming computers to recognize human faces (Bledsoe 1966a, 1966b; Bledsoe and Chan 1965).

Since then, face recognition has gone through many evolutions. In the early 1990s, holistic approaches dominated the facial recognition community. During this period, low-dimensional features of facial images were derived using the EigenFace approach.

Image shows top 36 EigenFaces
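To make the EigenFace idea concrete, here is a minimal sketch of deriving low-dimensional face features via principal component analysis. The `faces` array below is stand-in data; a real pipeline would load flattened, aligned grayscale face crops:

```python
import numpy as np

# Stand-in for a real dataset: N flattened grayscale faces, shape (N, H*W).
faces = np.random.rand(200, 64 * 64)

# 1. Center the data around the mean face.
mean_face = faces.mean(axis=0)
centered = faces - mean_face

# 2. SVD of the centered data; the rows of Vt are the EigenFaces,
#    i.e. the principal directions of variation in face space.
U, S, Vt = np.linalg.svd(centered, full_matrices=False)
eigenfaces = Vt[:36]  # keep the top 36 components, as in the image above

# 3. Projecting a face onto the EigenFaces gives a low-dimensional code.
code = (faces[0] - mean_face) @ eigenfaces.T  # a 36-dimensional feature

# 4. Holistic recognition then compares such codes, e.g. by nearest
#    neighbor in Euclidean distance.
```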

In the early 2000s, local-feature-based face recognition was introduced, in which discriminative features were extracted using handcrafted filters such as Gabor and local binary patterns (LBP).

Convolution results of a face image with two Gabor filters
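As a rough illustration of this era, the sketch below convolves a face image with a small bank of Gabor filters using OpenCV. The file path and filter parameters are illustrative choices, not a specific published pipeline:

```python
import cv2
import numpy as np

# Placeholder path; falls back to stand-in data so the sketch runs anywhere.
face = cv2.imread("face.jpg", cv2.IMREAD_GRAYSCALE)
if face is None:
    face = np.random.randint(0, 256, (112, 112), dtype=np.uint8)

responses = []
for theta in (0, np.pi / 4, np.pi / 2, 3 * np.pi / 4):  # four orientations
    # 21x21 Gabor kernel: sigma=4, wavelength=10, spatial aspect ratio 0.5.
    kernel = cv2.getGaborKernel((21, 21), 4.0, theta, 10.0, 0.5)
    responses.append(cv2.filter2D(face, cv2.CV_32F, kernel))

# Stacked filter responses form the kind of handcrafted feature map that
# early-2000s systems would encode and feed to a classifier.
features = np.stack(responses, axis=-1)
```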

In the early 2010s, learning-based local descriptors were introduced, in which the local filters and encoders themselves were learned from data rather than handcrafted.

Face recognition evolution timeline

The year 2014 marked a turning point in the evolution of facial recognition, reshaping the research landscape of the technology. It was the year when the accuracy of Facebook's DeepFace model (97.35%) on the LFW benchmark dataset approached human performance (97.53%) for the first time. Just three years after this breakthrough, face recognition accuracy reached 99.80%.

So, what changed in all these years?

All approaches up until 2014 used one- or two-layer representations, such as filter responses, histograms of feature codes, or distributions of dictionary atoms, to recognize the human face.
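A classic LBP pipeline is a good example of such a shallow representation: one encoding layer followed by a histogram. Here is a minimal sketch using scikit-image, again with stand-in image data:

```python
import numpy as np
from skimage.feature import local_binary_pattern

# Stand-in for a grayscale face crop.
face = np.random.randint(0, 256, (64, 64), dtype=np.uint8)

# Layer 1: encode each pixel by comparing it with 8 neighbors at radius 1;
# "uniform" maps the 256 raw patterns to 10 rotation-robust codes.
codes = local_binary_pattern(face, P=8, R=1, method="uniform")

# Layer 2: summarize the codes as a histogram -- the face descriptor that
# a matcher or classifier would consume.
hist, _ = np.histogram(codes, bins=np.arange(11), density=True)
```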

Deep learning-based models, however, use a cascade of many layers for feature extraction and transformation. The lower layers learn low-level features similar to Gabor and SIFT responses, whereas the higher layers learn higher-level abstractions. In other words, what previously required several distinct face recognition approaches can now be done with a single deep-learning-based model.

Feature vectors that represent a face at different layers of a deep learning network
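A modern pipeline therefore reduces to a single network that maps a face image to an embedding vector, with recognition performed by comparing embeddings. The sketch below is an illustrative toy network in PyTorch, not DeepFace or any published architecture, and the matching threshold is a hypothetical value:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinyFaceEmbedder(nn.Module):
    """Illustrative stand-in for a deep face recognition backbone."""
    def __init__(self, dim=128):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 32, 3, stride=2, padding=1), nn.ReLU(),   # low-level edges/textures
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),  # mid-level parts
            nn.Conv2d(64, 128, 3, stride=2, padding=1), nn.ReLU(), # higher-level abstractions
            nn.AdaptiveAvgPool2d(1),
        )
        self.fc = nn.Linear(128, dim)  # final face embedding

    def forward(self, x):
        z = self.features(x).flatten(1)
        return F.normalize(self.fc(z), dim=1)  # unit-length embedding

# Verification: two images match if their embeddings are close.
model = TinyFaceEmbedder().eval()
a = torch.rand(1, 1, 112, 112)  # stand-in face images
b = torch.rand(1, 1, 112, 112)
with torch.no_grad():
    similarity = (model(a) * model(b)).sum()  # cosine similarity in [-1, 1]
    same_person = similarity > 0.5            # threshold is dataset-dependent
```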

About the Author

Sabina is an AI Specialist and Machine Learning Engineer. She is a Writer and a former Editorial Associate at Towards Data Science.
