TREYESCAN: configuration of an eye tracking test for the measurement of compensatory eye movements in patients with visual field defects – Scientific Reports

Experiment setup, recording device and analysis software

The present study employed an experimental setup and methodology that we described in a previously published work18. The experiment took place at Amsterdam University Medical Centers (UMC), location VU University Medical Center (VUmc), and utilized three 24-inch HP EliteDisplay E243i monitors with a resolution of 1920 × 1200 pixels and a pixel density of 94.34 ppi. Participants sat in a car seat positioned 65 cm away from the screens to achieve a 100° field of view, while the table could be adjusted in height to ensure that the eyes were centered on the middle of the screen (Fig. 1). Head movements were permitted in all directions during the experiment.
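The stated geometry can be checked with a short calculation. This is an illustrative sketch, assuming the three monitors approximate a single flat plane (in practice the side monitors may have been angled):

```python
import math

CM_PER_INCH = 2.54

def pixels_per_degree(distance_cm: float, ppi: float) -> float:
    """On-axis pixel density in pixels per degree of visual angle."""
    cm_per_degree = 2 * distance_cm * math.tan(math.radians(0.5))
    return cm_per_degree * ppi / CM_PER_INCH

def horizontal_fov_deg(total_width_px: int, ppi: float, distance_cm: float) -> float:
    """Horizontal field of view subtended by a flat screen plane."""
    half_width_cm = total_width_px / ppi * CM_PER_INCH / 2
    return 2 * math.degrees(math.atan(half_width_cm / distance_cm))

ppd = pixels_per_degree(65, 94.34)             # roughly 42 px per degree at center
fov = horizontal_fov_deg(3 * 1920, 94.34, 65)  # roughly 100 degrees across three screens
```

With the values from the text (65 cm viewing distance, 94.34 ppi, 3 × 1920 px), the horizontal extent indeed comes out close to the stated 100° field of view.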

Figure 1

TREYESCAN setup18. (A) Picture of the setup. Participants were seated in front of three display monitors while wearing the Pupil Core eye tracker. The individual depicted in the image is not a study participant and has provided consent for the publication of the image. (B) Schematic overview of the setup (view from above). The participant is located 65 cm in front of the central display monitor. The examiner is located to the right of the participant and can guide the experiment from the host monitor. On the MacBook, the stability of the signal and the performance of the participant could be monitored.

The participants’ eye movements were recorded by a head-mounted eye tracking device (Pupil Labs Core glasses, received October 2021, Pupil Labs, Berlin, Germany)19. The Surface Tracker plugin by Pupil Labs was used to define the surface area of the display with AprilTags, which are QR-like markers20. This allows the gaze to be mapped onto the screen surface, thus obtaining screen-based gaze coordinates with a head-mounted eye tracker.
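Pupil Player's surface exports report gaze in surface-normalized coordinates (0 to 1, with the origin at the bottom-left of the surface). A minimal sketch of the conversion to screen pixel coordinates, using the 5760 × 1200 dimensions of this setup as assumed defaults:

```python
def surface_to_pixels(x_norm: float, y_norm: float,
                      width_px: int = 5760, height_px: int = 1200) -> tuple:
    """Map surface-normalized gaze (origin bottom-left) to screen pixel
    coordinates (origin top-left), flipping the vertical axis."""
    return x_norm * width_px, (1.0 - y_norm) * height_px

# Gaze at the center of the surface lands at the center of the screen.
surface_to_pixels(0.5, 0.5)  # (2880.0, 600.0)
```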

The data were exported using Pupil Labs Player v3.3.0, and dynamic area of interest analyses were performed using the TREYESCAN Toolkit software18,21, written in Python 3.8.322 using NumPy23, Pandas24, OpenCV25, Matplotlib26, and SciPy27.

Traffic scenes

Two different driving routes, each with an approximate duration of 30 min, were collaboratively designed with the CBR (Dutch driving test organization), the central office for driver’s license administration in The Netherlands. These routes were specifically selected in the city of Amsterdam, characterized by its urban environment with narrow streets, thereby presenting complex and challenging traffic scenarios. Each route was driven five times.

Video footage was captured using a Sony A7III camera equipped with a Laowa Zero-D ultra-wide field 12 mm f/2.8 lens (angle of view: 121.96°; minimal distortion) from within a moving Toyota Prius II vehicle. The camera was mounted centrally behind the windshield, and a black piece of felt was placed on the dashboard to prevent reflections (Fig. 2A). The footage was captured in 4K resolution (3840 × 2160) at 25 frames per second. The video clips were subsequently upscaled and cropped to a resolution of 5760 × 1200 using Adobe Premiere Pro (Adobe Inc., San Jose, CA, USA) to fit the three-screen setup (Fig. 2B).
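The resizing arithmetic can be made explicit. A sketch of the crop geometry, assuming a uniform 1.5× upscale and a centered vertical crop (the exact crop position was set in Premiere Pro and may differ):

```python
def crop_geometry(src_w: int = 3840, src_h: int = 2160,
                  dst_w: int = 5760, dst_h: int = 1200):
    """Uniformly scale so the full source width fills dst_w, then crop
    the excess height. Returns (scale, scaled_h, top_offset)."""
    scale = dst_w / src_w            # 5760 / 3840 = 1.5
    scaled_h = round(src_h * scale)  # 2160 * 1.5 = 3240
    top = (scaled_h - dst_h) // 2    # centered crop: an assumption
    return scale, scaled_h, top
```

The full source width is preserved, so only vertical content outside the 1200-pixel band is discarded, consistent with Fig. 2B.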

Figure 2

Recording and editing of traffic scenes. (A) The car and camera setup. (B) Indication of crop to 5760 × 1200. The full width of the video was used. No information about the traffic scene was lost by cropping the video’s height to facilitate screen fitting.

The video content was analyzed to identify relevant traffic scenarios that require the driver’s attention, while also ensuring that objects appeared from diverse directions in the peripheral visual field. To minimize memory recollection effects, for each traffic scene a duplicate scene was chosen that contained the same part of the route, but with different objects. Finally, a total of 42 traffic scenes were selected and compiled into 6 videos of approximately 8 min each.


In order to determine which objects were relevant to the analysis, a panel of 10 raters, comprising 5 CBR experts on practical fitness to drive and 5 experienced drivers with knowledge of conducting traffic-related research and/or patient assessment (co-authors RvN, GvR, BM, JK, and LvR), was presented with the 6 videos. The panel had a median of 42.5 (IQR [34.0–43.8]) years of driving experience. A web application was developed to enable raters to individually indicate dynamic AOIs in the video by mouse clicks (the source code for this application is available as open source on GitHub28). The application included options to view the videos at different speeds, play/pause them, and rewind. The raters had to choose between two categories: Must-Be-Seen and May-Be-Seen objects. The former included objects that necessitate active or passive consideration by the driver, such as reducing speed, changing lanes, or delaying acceleration, while the latter included objects that are relevant for a driver to see but do not require a change in driving behavior, such as pedestrians on the sidewalk and oncoming traffic in the other lane.

Figure 3 presents a heatmap of the distribution of all mouse clicks of the 10 raters for the 6 videos. Notably, most clicks were made in the central part of the screen. The results of the 10 raters were manually coded for each scene in the six videos by observing the videos with an overlay of circle animations representing the location of the clicks. An object was included as a relevant AOI if it was rated by 5 or more raters. If an object was chosen by 4 raters, a panel reviewed its inclusion. The majority of raters determined whether an object was Must-Be-Seen or May-Be-Seen. Traffic lights were included in all cases. A new AOI was incorporated when there was a change in the color of the traffic lights or when the brake lights of a car were activated.
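The inclusion rule can be summarized in a few lines. A simplified sketch (the traffic-light rule and the panel review of borderline objects are handled outside this function):

```python
from collections import Counter

def decide_inclusion(votes):
    """votes: one category label ('must' or 'may') per rater (of 10) who
    marked the object. Returns the decision per the rules in the text:
    5 or more raters -> include, exactly 4 -> panel review, else exclude."""
    if len(votes) >= 5:
        category = Counter(votes).most_common(1)[0][0]  # majority category
        return "include", category
    if len(votes) == 4:
        return "panel_review", None
    return "exclude", None
```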

Figure 3

Heatmap of the distribution of all mouse clicks made by the 10 raters for the 6 videos. The dimensions of the heatmap represent the dimensions of the video screen. Histograms of the click distributions are shown for the x- and y-axes.

Dynamic allocation of AOIs was performed using tools that were previously described in our published work18. These tools enabled us to determine the location of each AOI for every frame of the video, tracking them from their initial appearance to their disappearance. The TREYESCAN software allows for the addition of margins around the areas of interest, in order to compensate for the inaccuracy of the eye tracker. A margin size of 2.5° was chosen for the analysis.
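Converting the 2.5° margin into screen pixels depends on viewing distance and pixel density. A sketch using the setup values from this study; the helper names are hypothetical, not the TREYESCAN API:

```python
import math

def degrees_to_pixels(deg: float, distance_cm: float = 65, ppi: float = 94.34) -> float:
    """Width in pixels subtended by `deg` degrees at the viewing distance."""
    cm = 2 * distance_cm * math.tan(math.radians(deg / 2))
    return cm * ppi / 2.54

def expand_aoi(x, y, w, h, pad):
    """Grow an AOI bounding box by `pad` pixels on every side."""
    return x - pad, y - pad, w + 2 * pad, h + 2 * pad

pad = degrees_to_pixels(2.5)  # roughly 105 px at 65 cm and 94.34 ppi
```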

Measurements in normal-sighted individuals

Normally sighted participants were recruited through snowball sampling. Eligibility criteria required participants to have no history of ophthalmic comorbidities and at least 5 years of driving experience. The participants performed an Esterman visual field test29 and a visual acuity measurement using the ETDRS chart30. Additionally, a customized central visual field test was programmed on the Humphrey Field Analyzer II (HFA). The test employed a three-zone strategy with an age-corrected test mode according to HFA routines, a size III white stimulus, 80 points, and a point spacing of 2°. Its purpose was to screen the central 10° binocularly for any potential visual field defects, as the Esterman visual field test does not cover this area. Only participants with a minimal visual acuity of 0.0 LogMAR (20/20 Snellen or 1.0 decimal notation) and no defects on the visual field tests were included in the study. Participants with multifocal prescription glasses were excluded.

During the experiment, the participants’ eye movements were recorded while they viewed the six videos with traffic scenarios. After the third video, a scheduled intermission allowed participants to take a break. During this intermission, participants were asked whether they had any feedback on the setup, the videos, or their experience during the test, and the researcher conducting the experiment documented any feedback provided verbally. After all videos had been viewed, the same question was asked again to capture any additional comments. To isolate the visual aspect of the task, the videos were presented without accompanying traffic sounds. The participants were instructed to imagine themselves as the driver of the vehicle and to press a button whenever they identified an object that required action, such as reducing speed or changing lanes. This instruction was given to avoid the interference caused by providing a verbal commentary during concurrent hazard perception tasks31. Before each video, a nine-point screen calibration was conducted, followed by a 12-point validation routine, both extending across the entire screen18. The Pupil Labs software then generated values for the accuracy and precision of the calibration. During the calibration process, we set a target accuracy threshold of less than 2.5°, in line with the Pupil Labs documentation, which indicates that 3D gaze mapping should achieve an accuracy of 1.5° to 2.5°. When the initial calibration did not meet this requirement, up to two additional calibration attempts were performed. After the third calibration, the accuracy value obtained was accepted, even when it exceeded 2.5°.
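The calibration acceptance logic described above amounts to a simple retry loop. A sketch, where `calibrate` is a stand-in for one Pupil Labs calibration-plus-validation run returning an accuracy value in degrees:

```python
def run_calibration(calibrate, threshold_deg=2.5, max_attempts=3):
    """Repeat calibration until accuracy meets the threshold; the result of
    the final attempt is accepted even if it exceeds the threshold."""
    accuracy = None
    for attempt in range(1, max_attempts + 1):
        accuracy = calibrate()
        if accuracy <= threshold_deg:
            return accuracy, attempt
    return accuracy, max_attempts
```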

Following a four-week interval, the participants underwent a second viewing of the videos in a different order to gather data from two separate instances. A randomization tool was employed to determine the viewing order, which could be presented as either 123–456 or 456–123. During the second measurement session, the participant was presented with the opposite viewing order from their initial session. The use of this time interval, coupled with the inclusion of duplicate scenes, was intended to reduce potential recall bias.

All procedures involved in collecting and analyzing data from human subjects were conducted in accordance with the ethical standards of the 1964 Declaration of Helsinki and its subsequent amendments. The experimental protocol was officially approved by the Medical Ethical Committee of Amsterdam UMC, location VUmc (METC-Number: 2020.475). All participants provided informed consent prior to participation. The individual depicted in Fig. 1A was not a study participant and has provided informed consent for the publication of the image in an online open access publication.

Determining the contents of TREYESCAN

The included participants viewed the 6 videos of approximately 8 min each in two measurement sessions. For the next study stage, a case–control study with glaucoma patients, the objective is to create a test consisting of the most effective scenes. To determine which videos should be included in the final test, a stepwise approach was used, based on the feedback of the included participants and the traffic scene characteristics.

Determining the relevant AOIs to include in TREYESCAN

The data obtained from this pilot study will be used to determine the most relevant AOIs for inclusion in the subsequent phase of analysis in the case–control study. We hypothesize that the included normally sighted participants, with their considerable driving experience, have a good understanding of where their visual focus should lie within the traffic scenes. Consequently, it would be inappropriate to evaluate the patient population in later stages using AOIs that are not looked at by normally sighted individuals. To address this, we propose that an AOI should be included if it has been viewed by at least half of the participants (an arbitrary threshold).

On the other hand, when conducting AOI analyses in eye tracking, it is important to consider entry time, the duration from the onset of the stimulus until the AOI is first viewed13. We observed that certain AOIs do not require a deliberate saccade to be immediately “hit”, due to a (close to) central location or the added margins. Consequently, it would be unfair to compare entry times in the case–control study for objects that are consistently and falsely “hit” by most participants in this pilot study. To address this issue, we decided to exclude AOIs that start in the central 10° of the screen. Our findings indicate that AOIs originating within the central 10° are predominantly objects in the oncoming lane, which start at a small size and therefore obtain relatively large margins. Consequently, these AOIs do not capture deliberate gaze behavior, since they are immediately and falsely “hit” when the participant looks at the road ahead.

Similarly, we encountered other objects starting within the central screen of the setup that exhibit similar characteristics. For objects appearing between the central 10° and 30°, we introduced an additional variable, entry time. We identified AOIs with a short entry time, taking into account previous research indicating that a natural saccade latency of 120–150 ms occurs with everyday stimuli and environments. Hence, we set a threshold of 120 ms for the median entry time of all participants, enabling us to select only deliberate saccades to these AOIs13,32,33. The median entry time was used because of the skewed distribution of entry times. If an object appeared within the central 10° to 30° and had a median entry time shorter than 120 ms, it was marked for exclusion in that measurement session (T1 or T2).

In summary, the exclusion of an AOI was determined based on the following criteria for each measurement session separately: (1) if it was observed by fewer than half of the participants, (2) if it appeared within the central 10°, or (3) if it appeared within the central 10°–30° and the median entry time was shorter than 120 ms. If any of these criteria were met, the AOI was considered for exclusion in that particular measurement session. Only when an AOI satisfied the exclusion criteria in both measurement sessions was it ultimately marked for exclusion.
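The three criteria and the two-session rule can be stated compactly. A sketch with hypothetical variable names:

```python
def exclude_in_session(frac_viewed, start_ecc_deg, median_entry_ms):
    """Per-session exclusion: (1) viewed by fewer than half the participants,
    (2) AOI appears within the central 10 degrees, or (3) appears within
    10-30 degrees with a median entry time under 120 ms."""
    return (frac_viewed < 0.5
            or start_ecc_deg < 10
            or (10 <= start_ecc_deg < 30 and median_entry_ms < 120))

def exclude_overall(t1_excluded, t2_excluded):
    """An AOI is marked for exclusion only if excluded in both sessions."""
    return t1_excluded and t2_excluded
```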
