Module src.recognition.mphands
Classes
class MPHands(confidence_settings: Tuple[int, int] = (50, 50), max_hands: int = 1, static_image_mode: bool = False)
A simple wrapper around the mediapipe Hands solution.
Args
min_det- Minimum confidence value ([0.0, 1.0]) for hand detection to be considered successful; supplied as the first element of confidence_settings. See details in https://solutions.mediapipe.dev/hands#min_detection_confidence.
min_track- Minimum confidence value ([0.0, 1.0]) for the hand landmarks to be considered tracked successfully; supplied as the second element of confidence_settings. See details in https://solutions.mediapipe.dev/hands#min_tracking_confidence.
static_image_mode- Whether to treat the input images as a batch of static and possibly unrelated images, or a video stream. See details in https://solutions.mediapipe.dev/hands#static_image_mode.
max_hands- Maximum number of hands to detect. See details in https://solutions.mediapipe.dev/hands#max_num_hands.
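Note that confidence_settings is typed as a pair of ints (defaulting to (50, 50)) while the underlying mediapipe thresholds are floats in [0.0, 1.0]. A plausible reading is that the wrapper treats the ints as percentages and scales them; the helper name below is hypothetical and the division by 100 is an assumption, not confirmed by the source:

```python
from typing import Tuple


def to_confidence(settings: Tuple[int, int]) -> Tuple[float, float]:
    """Hypothetical conversion of percentage-style ints to the
    [0.0, 1.0] floats mediapipe expects (assumed behaviour)."""
    min_det, min_track = settings
    return min_det / 100.0, min_track / 100.0


print(to_confidence((50, 50)))  # → (0.5, 0.5)
```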
Methods
def get_landmarks(self, img: numpy.ndarray) -> List[mediapipe.framework.formats.landmark_pb2.NormalizedLandmarkList]
Processes the image using the mediapipe hand model pipeline.
Args
img- An RGB image represented as a numpy ndarray.
Raises
RuntimeError- If the underlying mediapipe graph encounters an error.
ValueError- If the input image is not a three-channel RGB image.
Returns
A list whose length equals the max_hands argument passed to the constructor. Each element represents one hand as a list of 21 hand landmarks, each composed of normalized x, y and z coordinates. See details in https://google.github.io/mediapipe/solutions/hands#multi_hand_landmarks (Note that handedness (i.e. left or right hand) is not currently exposed by this wrapper.)
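The documented ValueError implies the wrapper validates the image shape before invoking the mediapipe pipeline. A minimal sketch of such a check (the helper name is hypothetical; the wrapper's actual validation logic is not shown in the source):

```python
import numpy as np


def validate_rgb(img: np.ndarray) -> None:
    """Raise ValueError unless img is a (H, W, 3) RGB array,
    mirroring the ValueError documented for get_landmarks."""
    if img.ndim != 3 or img.shape[-1] != 3:
        raise ValueError("expected a three-channel RGB image of shape (H, W, 3)")


frame = np.zeros((480, 640, 3), dtype=np.uint8)  # a valid RGB frame
validate_rgb(frame)  # passes silently
```

A grayscale frame of shape (480, 640) would fail this check, which matches the behaviour the docstring describes.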