Public appearances by a leader of Vladimir Putin’s stature are scrutinized by the media industry and intelligence services alike. Scrutiny of the Russian president’s appearances intensified sharply following the invasion of Ukraine in February 2022.
Various observers—primarily from the media sphere—have focused on certain aspects of his body language, highlighting alleged anomalies in the kinesics of his hands, which at times have appeared unusually static and rigid.
The most prominent example is the conversation Putin held with then–Minister of Defense Sergey Shoigu on 21 April 2022 to discuss military developments on the Donbas front. In the widely circulated video of the meeting, his right hand “pinches” the edge of the desk, while the fingers and palm of his left remain resting on it, without interruption. The position of his neck and head relative to his shoulders appears tense: one gets the impression that Putin is almost intimidated, as if he—rather than Shoigu—were the one having to report.

The posture attracted substantial media coverage, with numerous outlets speculating about the onset of oncological or neurological conditions.
Similarly, in a speech in late November 2024, Putin addressed the West to threaten an international expansion of the conflict in the event of missile strikes on Russian territory. In the video, his hands appear motionless and clasped together, almost as if they had been edited in post-production.

In this case as well, some media outlets—such as Germany’s Bild—speculated about possible pathologies.
Lastly, Italian media outlet L’Indipendente, on 11 December, summarized various claims regarding Vladimir Putin’s (allegedly) precarious state of health, also citing recent statements by prominent Italian politicians.
The Kremlin has always denied these claims.
In this OSINT analysis, begun in the course of 2024, I propose a comprehensive fact-check aimed at verifying—through facial recognition techniques and automated limb “detection”—whether historical quantitative data on the kinesics of Putin’s hands can be obtained, with the goal of confirming or refuting their progressive “freezing.”
From a methodological standpoint, most of the data analysis was carried out by a Python script implemented using the DeepFace and MediaPipe libraries, as well as the ffmpeg video-processing tool.
From 2014 through 2025 inclusive, 29 videos of varying length were downloaded from YouTube and Yandex, depicting Vladimir Putin giving interviews or speaking in public.
For each video, the following routine was carried out:
- downloading via the yt-dlp library;
- segmentation into frames with the aid of the ffmpeg software;
- extraction, from the resulting set of frames, of the images depicting a single facial profile—Putin’s—and both of his hands. Facial recognition was facilitated through the use of the DeepFace module, while the MediaPipe library was used to detect the presence of hands;
- cleaning the dataset of the inevitable (albeit statistically manageable) detection errors made by the libraries used;
- calculation, for each set of frames corresponding to each of the 29 original videos, of a Hands Motion Index (hereafter: HMI) suitable for estimating a measure of the “radial” mobility of both of Putin’s hands relative to a fixed point: the center of his face or, alternatively, the geometric center of the frame.
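The first two steps of this routine can be sketched as below. The output paths, the MP4 container, and the one-frame-per-second sampling rate are illustrative assumptions of mine, not details reported in the analysis.

```python
import subprocess
from pathlib import Path

def build_ytdlp_cmd(url: str, out_path: str) -> list[str]:
    # Download a single video as MP4 (an assumed format; the analysis
    # does not specify the one actually used).
    return ["yt-dlp", "-f", "mp4", "-o", out_path, url]

def build_ffmpeg_cmd(video_path: str, frames_dir: str, fps: int = 1) -> list[str]:
    # Segment the video into numbered JPEG frames; one frame per second
    # is an assumed sampling rate to keep the frame count manageable.
    return ["ffmpeg", "-i", video_path,
            "-vf", f"fps={fps}", f"{frames_dir}/frame_%06d.jpg"]

def process_video(url: str, workdir: str = "data") -> None:
    # Run both steps for one video (hypothetical directory layout).
    video = str(Path(workdir) / "video.mp4")
    frames = Path(workdir) / "frames"
    frames.mkdir(parents=True, exist_ok=True)
    subprocess.run(build_ytdlp_cmd(url, video), check=True)
    subprocess.run(build_ffmpeg_cmd(video, str(frames)), check=True)
```
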
To compute the HMI, it was first necessary to determine the absolute position of Putin’s hands and face in each frame, storing the positional information in a .json file linked to each image:
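The detection itself is delegated to DeepFace (face) and MediaPipe (hands); the sketch below illustrates only the storage step, with the pixel coordinates passed in as plain tuples and the JSON field names being assumptions of mine rather than the analysis's actual schema.

```python
import json

def make_frame_record(frame_name, face_center, left_hand, right_hand,
                      frame_size):
    """Build the per-frame positional record. Coordinates are the
    (x, y) pixel centers produced upstream by DeepFace (face) and
    MediaPipe (hands); field names here are illustrative."""
    w, h = frame_size
    return {
        "frame": frame_name,
        "frame_size": [w, h],
        "face_center": list(face_center),
        # A hand may be missing in a given frame; store None in that case.
        "left_hand": list(left_hand) if left_hand else None,
        "right_hand": list(right_hand) if right_hand else None,
    }

def save_record(record, json_path):
    # Write the record to the .json file linked to the image.
    with open(json_path, "w", encoding="utf-8") as fh:
        json.dump(record, fh, indent=2)

rec = make_frame_record("frame_000001.jpg", (640, 360),
                        (500, 600), (780, 610), (1280, 720))
```
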

The resulting dataset was used to compute the historical HMI time series between 2014 and 2025, and also to visualize a “point cloud” (a cloud of positions) graphically illustrating the relative position (with respect to the face) of Putin’s hands across different time segments (e.g., 2014–2019 vs. 2020–2025).
To generate an HMI that was as normalized and comparable as possible (videos are often of different dimensions, or affected by non-standardized media settings and by varying zoom levels and camera positions—both across different videos and across frames within the same video), the index was calculated as the average percentage variability of the distance of both hands from the center of the face. Accordingly:
- if the index is low, the hands remain at fairly constant distances from the face (relatively stable position → low mobility);
- if the index is high, the hands move frequently (greater variation relative to the average distance).
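The text does not give an explicit formula; one reading consistent with “average percentage variability of the distance” is the coefficient of variation of the hand–face distance, expressed as a percentage and averaged over both hands. The sketch below follows that interpretation, which is an assumption of mine.

```python
from statistics import mean, pstdev

def hand_face_distance(hand, face):
    # Euclidean ("radial") distance between a hand and the face center.
    return ((hand[0] - face[0]) ** 2 + (hand[1] - face[1]) ** 2) ** 0.5

def hmi(left_dists, right_dists):
    """Hands Motion Index: per-hand coefficient of variation of the
    hand-face distance, as a percentage, averaged over both hands."""
    def cv_percent(dists):
        m = mean(dists)
        return 100 * pstdev(dists) / m if m else 0.0
    return mean([cv_percent(left_dists), cv_percent(right_dists)])

# A hand holding a near-constant distance yields a low HMI;
# a frequently moving one yields a high HMI.
static = [100, 101, 99, 100]
mobile = [60, 140, 80, 160]
```
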
The analysis pipeline processed a total of 271,245 frames. The historical HMI time series is presented in the chart below:

The bar chart shows that the HMI ranges from minimum values of 11 (year 2022) to maximum values of 31 (years 2017 and 2021). At first glance, the trend does not appear to support hypotheses concerning a progressive contraction of Vladimir Putin’s manual mobility; however, the picture changes markedly if a weighting factor based on the number of frames available for each year is introduced. Doing so reveals a slight and gradual reduction in mobility across several time windows:
| INTERVALS | HMI | FRAMES |
|-----------|------|---------|
| 2014–2021 | 26.42 | 103,094 |
| 2022–2025 | 20.30 | 168,151 |
| 2014–2019 | 26.04 | 62,923 |
| 2020–2025 | 21.59 | 208,322 |
| 2014–2023 | 24.04 | 152,461 |
| 2024–2025 | 20.80 | 118,784 |
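The frame-based weighting described above can be reproduced as a weighted average of yearly HMI values, with frame counts as weights. The per-year figures in the example are hypothetical; only the aggregation scheme is illustrated.

```python
def weighted_hmi(yearly):
    """Frame-weighted average HMI over a set of years.
    `yearly` maps year -> (hmi, frame_count)."""
    total_frames = sum(frames for _, frames in yearly.values())
    return sum(h * frames for h, frames in yearly.values()) / total_frames

# Hypothetical per-year values, for illustration only: a low-HMI year
# with many frames pulls the aggregate down more than a high-HMI year
# with few frames.
window = {2022: (11.0, 27_220), 2023: (25.0, 10_000)}
```
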
The minimum HMI values occur—consistent with the analysis’s underlying assumption—in the videos in which Putin displays an apparent kinetic deficit. This is the case, for example, of the February 2022 video in which the so-called “special military operation” against Ukraine was announced (HMI = 11; 27,220 frames analyzed).
Similarly, the highest values occur in a 2021 video recorded for the year-end press conference (HMI = 31; 22,376 frames analyzed).
The data aggregated by time windows would appear to confirm a reduction in hand mobility between the first and the second observed period.
Visualizing the data as a “point cloud”—that is, a “graphical map” showing, in two-dimensional space, the distribution of the coordinates of each hand relative to the face in two distinct periods (identified by red dots—the earlier period—and blue dots—the more recent period)—seems to support this hypothesis, albeit only within a small red area in the lower quadrant of the image, potentially corresponding to reduced movement in that region.
Time intervals 2014–2019 (HMI = 26.04) vs. 2020–2025 (HMI = 21.59):

Time intervals 2014–2021 (HMI = 26.42) vs. 2022–2025 (HMI = 20.30):

Time intervals 2014–2023 (HMI = 24.04) vs. 2024–2025 (HMI = 20.80):

In the third and final comparison, kinesics appear more contracted in the two-year period 2024–2025 than over the decade 2014–2023.
These quantitative observations would seem to be contradicted by an empirical appraisal of Putin’s body language during the “marathon” interview given on 19 December 2025, in the course of which the head of the Kremlin appears to display a normal ease in gesturing.
Inevitably, this analysis is affected by biases stemming from several factors: differences in communicative context and setting (interviews, podium speeches, meetings, reading prepared texts, the presence of objects such as microphones or other devices that may constrain gesturing); variability in filming and directing conditions (distance from the camera, zoom, crop, angle, changes of shot, resolution, and compression); and the heterogeneity of the videos in terms of duration and overall quality, as well as the imperfect temporal comparability of the samples (the number of videos and sequences available per year and the possible over-representation of specific formats in certain periods).
Added to this are biases induced by the automated selection and extraction pipeline: reliance on DeepFace’s reliability in filtering frames that contain only the face; MediaPipe’s sensitivity to occlusions, blur, and non-frontal poses when stably detecting both hands; and the resulting imperfect selection of “admissible” frames. Further distortions may arise from the use of the center of the face as a reference point (jitter in the reference point and head movements that may be partially absorbed into the hand–face measure), from the approximations adopted for geometric normalization (for instance, normalization by the frame diagonal, which is not equivalent to normalization on a facial scale), and from statistical aggregation at the annual and macro-period level which, although robust (medians and weightings), may still be influenced by strong dependence on individual units in years with small sample sizes and by the presence of residual outliers despite manual cleaning.
Moreover, it should be noted that a reduction in gesticulation may be at least partly consistent with advancing age. The available scientific literature shows that, often, older and younger individuals perform—relative to the number of words spoken—a similar quantity of hand movements. However, some differences emerge when one examines what the hands are doing. With age, “iconic” or “representational” gestures decrease most—namely, those that visually illustrate the topic: mimicking the shape of an object, indicating its size, or reproducing a movement or trajectory with the hands. Compared with younger people, older adults make less use of this type of kinesics, which adds informational content to the spoken representation. By contrast, simpler gestures that are less tied to content—such as rhythmic gestures marking the cadence of a sentence—appear to persist even into later life (see Ozer-Goksun, 2020). In theory, this dynamic could explain the contraction in Putin’s manual movements detected through the frame analysis starting in 2014 (when the “Tsar” was 62) compared with today (73).
In conclusion, the data analysis appears to confirm a slight reduction in the kinesics of Putin’s hand movements over the past few years, although many questions remain regarding the reliability of the method adopted. It would certainly benefit from fine-tuning based, at a minimum, on two main interventions:
- a significant expansion of the data base (videos) for each year;
- segmentation of each video into as many derived clips as there are different camera shots and zoom levels, in order to neutralize directing-related variables.
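The second intervention could be approximated with a simple cut detector: given a per-frame dissimilarity score (for instance, the scene-change metric exposed by ffmpeg’s `select` filter), the frame sequence is split wherever the score exceeds a threshold. The function below is a sketch of that idea; the threshold value is arbitrary.

```python
def split_into_shots(diff_scores, threshold=0.4):
    """Split a frame sequence into shots. `diff_scores[i]` is a
    dissimilarity score between frame i and frame i+1 (e.g., the
    scene-change score from ffmpeg's select filter); a score above
    `threshold` marks a cut. Returns (start, end) index pairs over
    the frames, end exclusive."""
    shots, start = [], 0
    for i, score in enumerate(diff_scores):
        if score > threshold:
            shots.append((start, i + 1))  # close the shot at the cut
            start = i + 1
    shots.append((start, len(diff_scores) + 1))  # final shot
    return shots
```

Each resulting (start, end) pair could then be treated as an independent clip, with the HMI computed per clip to neutralize changes of shot and zoom within a video.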
From a comparative perspective, a small control test is of interest: replicating the same analysis presented so far on another “ageing leader,” namely U.S. President Donald Trump.
In this case, a total of only 53,065 frames extracted from videos dating to 2015–2020 and 2024–2025 were examined. The availability of valid material for the occupant of the White House proves sparser than for his Moscow counterpart, given that U.S. productions tend to frame primarily the subjects’ faces, or in any case the upper torso, cutting the hands out of the shot. Moreover, the use of lecterns and podiums is frequent, which statically constrains hand position and makes computation of the HMI more complex.
With the exception of the years 2021–2023, during which the White House was occupied by his predecessor Joe Biden (reducing Trump’s exposure to contexts suitable for acquiring useful data), the HMI trend for Trump appears similarly fluctuating.

At the same time, the aggregated and weighted HMI measure for the two periods 2015–2020 and 2024–2025 appears to indicate, for Trump as well, a slight contraction in kinesics.
- Years 2015–2020 – HMI: 11.9
- Years 2024–2025 – HMI: 8.1
Similarly, the trend appears to be partially confirmed once the “point cloud” is plotted.

Naturally, the same cautionary considerations expressed earlier apply in this case as well.

