I'm attempting to implement anti-spoofing in an iOS app using the iPhone TrueDepth front camera. I've checked the following questions but still can't find a proper working solution:

Using iPhone TrueDepth sensor to detect a real face vs photo?
How to determine if a face is flat like an image using ARKit?
I trained a Core ML model using 22,000 depth images of human faces and 22,000 depth images of non-faces (objects, food, etc.). The accuracy of the model is very low.
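For context, this is roughly how I run the model on a captured depth map. It is only a sketch: `FaceDepthClassifier` is a placeholder for the class Xcode generates from my .mlmodel, and the label names are just for illustration.

```swift
import CoreML
import Vision

// Sketch of inference on a depth map. `FaceDepthClassifier` is a placeholder
// name for the Core ML model class generated from my .mlmodel file.
func classifyDepthMap(_ depthMap: CVPixelBuffer,
                      completion: @escaping (_ label: String, _ confidence: Float) -> Void) {
    guard let coreMLModel = try? FaceDepthClassifier(configuration: MLModelConfiguration()).model,
          let visionModel = try? VNCoreMLModel(for: coreMLModel) else { return }

    let request = VNCoreMLRequest(model: visionModel) { request, _ in
        // Top classification result, e.g. ("real", 0.93) or ("spoof", 0.88).
        guard let top = (request.results as? [VNClassificationObservation])?.first else { return }
        completion(top.identifier, top.confidence)
    }
    request.imageCropAndScaleOption = .scaleFill   // should match the training preprocessing

    let handler = VNImageRequestHandler(cvPixelBuffer: depthMap, options: [:])
    try? handler.perform([request])
}
```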
When testing with flat 2D images shown on a smartphone screen, I found that I get a depth map even for a flat 2D image like this. Even though the image is flat, the capture still produces a depth map for the person shown in the flat 2D picture (like this), so the model thinks it is a real face instead of a spoofed one.
I implemented depth capture by following this documentation, and I made sure that I get a depth map instead of a disparity map.
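For reference, this is roughly my capture setup (a minimal sketch based on that documentation; permissions, error handling, and session teardown are stripped out):

```swift
import AVFoundation

// Minimal sketch of the TrueDepth capture pipeline.
final class DepthCaptureController: NSObject, AVCaptureDepthDataOutputDelegate {
    private let session = AVCaptureSession()
    private let depthOutput = AVCaptureDepthDataOutput()
    private let depthQueue = DispatchQueue(label: "depth.queue")

    func start() {
        session.beginConfiguration()
        guard let device = AVCaptureDevice.default(.builtInTrueDepthCamera,
                                                   for: .video, position: .front),
              let input = try? AVCaptureDeviceInput(device: device),
              session.canAddInput(input),
              session.canAddOutput(depthOutput) else { return }
        session.addInput(input)
        session.addOutput(depthOutput)
        depthOutput.isFilteringEnabled = true   // hole-filled, smoothed depth
        depthOutput.setDelegate(self, callbackQueue: depthQueue)
        session.commitConfiguration()
        session.startRunning()
    }

    func depthDataOutput(_ output: AVCaptureDepthDataOutput,
                         didOutput depthData: AVDepthData,
                         timestamp: CMTime,
                         connection: AVCaptureConnection) {
        // The device may deliver disparity; convert to depth explicitly.
        let depth = depthData.depthDataType == kCVPixelFormatType_DepthFloat32
            ? depthData
            : depthData.converting(toDepthDataType: kCVPixelFormatType_DepthFloat32)
        process(depth.depthDataMap)
    }

    private func process(_ depthMap: CVPixelBuffer) {
        // ... feed into the classifier / variance check
    }
}
```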
The next approach I tried was to manually calculate the variance of the depth-map pixels, as mentioned in this answer, but that was also not accurate.
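This is essentially the variance computation I tried (a sketch assuming a `kCVPixelFormatType_DepthFloat32` buffer; invalid and zero pixels are skipped):

```swift
import CoreVideo

/// Variance of the valid (finite, non-zero) values in a 32-bit float depth map.
func depthVariance(of pixelBuffer: CVPixelBuffer) -> Float {
    CVPixelBufferLockBaseAddress(pixelBuffer, .readOnly)
    defer { CVPixelBufferUnlockBaseAddress(pixelBuffer, .readOnly) }

    let width = CVPixelBufferGetWidth(pixelBuffer)
    let height = CVPixelBufferGetHeight(pixelBuffer)
    let rowBytes = CVPixelBufferGetBytesPerRow(pixelBuffer)
    guard let base = CVPixelBufferGetBaseAddress(pixelBuffer) else { return 0 }

    var values: [Float] = []
    values.reserveCapacity(width * height)
    for y in 0..<height {
        let row = base.advanced(by: y * rowBytes).assumingMemoryBound(to: Float32.self)
        for x in 0..<width {
            let d = row[x]
            if d.isFinite && d > 0 { values.append(d) }   // skip holes / invalid pixels
        }
    }
    guard values.count > 1 else { return 0 }
    let mean = values.reduce(0, +) / Float(values.count)
    return values.reduce(0) { $0 + ($1 - mean) * ($1 - mean) } / Float(values.count)
}
```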
My next approach was to use the NCNN framework to implement anti-spoofing with the model from the MiniVision Android anti-spoofing sample. Since the sample was only available for Android, I rewrote their library for iOS using an Objective-C++ wrapper around the C++ code. I tested it by feeding an 80×80 UIImage in an OpenCV matrix format, but its accuracy is lower than the Android version's.
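In case it's relevant, this is how I produce the 80×80 UIImage before handing it to the Objective-C++ wrapper (a sketch; `resizedForAntiSpoofing` is my own helper, and I still need to verify the wrapper replicates the Android preprocessing, e.g. channel order and normalization):

```swift
import UIKit

// Resize to the 80×80 input the NCNN model expects before conversion to cv::Mat.
func resizedForAntiSpoofing(_ image: UIImage, side: CGFloat = 80) -> UIImage {
    let size = CGSize(width: side, height: side)
    let format = UIGraphicsImageRendererFormat.default()
    format.scale = 1   // exact 80×80 pixels, not points multiplied by the screen scale
    return UIGraphicsImageRenderer(size: size, format: format).image { _ in
        image.draw(in: CGRect(origin: .zero, size: size))
    }
}
```

Setting `format.scale = 1` matters here; otherwise the renderer multiplies the size by the device's screen scale and the wrapper receives a 160×160 or 240×240 buffer instead of 80×80.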
How can I solve this problem?