
Abstract

Remote testing has become a desirable option as it reduces participant burden, can be more convenient and enables longitudinal data collection to track symptom recovery. Recent advances in testing have enabled researchers to assess somatosensory processing and brain function remotely. Tactile testing modalities such as vibrotactile stimulation of the fingertips can provide information about cortical inhibition, for example, without the need for invasive testing procedures. In the current manuscript, we present our initial experience with ‘at home’ tactile testing. We demonstrate 1) it is possible to develop an ‘at home’ testing battery with multiple tasks that is comparable to ‘in lab’ testing; and 2) it is feasible to collect these data remotely and repeatedly to monitor longitudinal changes.

Participants included pediatric concussion patients and orthopedic injury (OI) controls, 8-18 years of age at the time of participation, recruited ~10 days after injury. Testing was conducted on a 2-digit hand-held vibrotactile stimulator and was based on previously used protocols. Stimulation was delivered to the left index and middle fingers. Data quality was visually inspected to ensure thresholds converged over time. A total of 19 participants were recruited: 11 concussion and 8 OI. Participants in the concussion group were 12.8 ± 2.2 years old (36.4% female) and participants in the OI group were 11.6 ± 2.5 years old (57.1% female) at the time of injury. Paired sample t-tests comparing task performance did not detect significant differences between the data collected at the home session and at the lab visit for the concussion group.

Our results demonstrate that vibrotactile sensory testing can provide a non-invasive, objective measure of central nervous system functioning without relying on subjective questionnaires, and that it is possible to perform this testing remotely. Our data from children and adolescents demonstrate that they are capable of completing these tasks at home; we therefore expect this at-home testing protocol could easily be administered in other populations.

Introduction

There is an increasing desire to evaluate neurological disorders in remote settings (i.e., not in the lab or clinic) for convenience and to minimize participant burden. For example, it may be desirable to perform assessments multiple times a week to track recovery from a concussion, and it is not practical to ask participants to visit a research lab this frequently. Alternatively, it may not be feasible to schedule visits to coincide with symptoms, for example to evaluate pain symptoms during a migraine attack, or it may not be feasible for participants to come in to laboratories for reasons such as transit or travel. Additionally, as we have learned with COVID-19, it can be undesirable to have participants come to large centres, and remote alternatives may be required. For these reasons, we seek to develop a testing paradigm to examine and quantify brain function that participants can perform independently and reliably.

Research using tactile perception can provide insight into somatosensory processing and can also inform brain function more broadly. This approach has been used to advance our knowledge of several disorders, including developmental disorders such as autism [1] and attention deficit hyperactivity disorder (ADHD) [2], neurological conditions such as concussion [3-12], psychiatric disorders such as obsessive compulsive disorder [13], and chronic pain [14]. While somatosensory processing itself may be altered in these conditions, these studies have also provided insight into cortical inhibition more broadly.

For comprehensive data, one approach is to use a battery of tactile tests to quantify specific features of sensory perception. Our approach is to use vibrotactile stimulation delivered to the fingertips; the participant responds either in a reaction time test (e.g., respond as quickly as possible after you feel a stimulus), a detection test (e.g., respond when you feel a stimulus) or a discrimination test (e.g., which of two stimuli felt bigger). These testing batteries can investigate multiple features of touch, such as detection or discrimination of stimulus amplitude or frequency. Alternatively, tasks such as temporal order judgement or duration discrimination can be used to interrogate more complex processing and integration. The addition of confound or priming stimuli can be used to investigate different aspects of cortical inhibition, including: (a) lateral inhibition, the shaping of neuronal responses by neighbouring neurons; (b) feedforward inhibition, the temporal integration of responses; and (c) surround suppression, which modulates the perceived contrast between stimuli [15].

Importantly, tactile testing is not limited to adults; we have shown children as young as 3 years old are able to perform testing with customized testing batteries [16]. Furthermore, while school-aged children show similarities with adults [17], there are also developmental trajectories in the quantified perception thresholds [16]. Recently, Mikkelsen and colleagues (2020) [18] demonstrated test-retest reliability for these tasks, providing increased confidence in the measures and supporting the use of these testing batteries longitudinally to monitor progression of disease or recovery following injury or intervention. For example, in the context of mild traumatic brain injury (mTBI), this would enable researchers to measure changes in central nervous system (CNS) functioning post-concussion without requiring a baseline (pre-injury) measure [4,12].

In the current manuscript, we present our initial experience with ‘at home’ tactile testing. We demonstrate 1) it is possible to develop an ‘at home’ testing battery with multiple tasks that is comparable to ‘in lab’ testing; and 2) it is feasible to collect these data remotely and repeatedly to monitor longitudinal changes. Importantly, we provide advice on design and implementation for future study success. For this feasibility study, we recruited a sample of adolescents with mTBI or orthopedic injury (OI), though the study is not powered to detect differences between these groups.

Materials and Methods

Participants in this study were recruited from a larger prospective longitudinal study (Advancing Concussion Assessment in Pediatrics) [19]. Participants included pediatric concussion patients and OI controls, 8-18 years of age at time of participation, and were recruited ~10 days after injury.

General Procedure

Testing was conducted on a 2-digit hand-held vibrotactile stimulator (Figure 1a; Brain Gauge, Cortical Metrics, Chapel Hill, NC, USA) and was based on previously used protocols [2,10]. Stimulation was delivered to the left index and middle fingers, and participants were asked to use a wired mouse with their right hand to respond to prompts on the screen. A wired mouse was used to minimize timing lags in responses and was provided for home use if needed (Figure 1). Participants had to correctly answer three practice trials before each task proceeded, to ensure they understood the instructions. Feedback was provided for the practice trials but not for the task trials. Data quality was visually inspected to ensure thresholds converged over time; thresholds that do not converge may indicate a lack of compliance or participant fatigue.

Figure 1. Vibrotactile testing items for home testing. (a) Hand-held stimulator device sent home with participants. (b) Example of the testing setup. Participants were asked to place their left hand on the stimulator device with their middle and index fingers resting on the orange buttons. They were asked to record their responses using the wired mouse in their right hand.

Tactile tasks

Simple Reaction Time 1 (SRT1)

Participants were told to press the button of a wired mouse as soon as they felt a tap from the stimulator. There were a total of 10 trials in this task, with an intertrial interval (ITI) of 4000-7000 ms. Each stimulus had a frequency of 25 Hz, an amplitude of 300 µm and a duration of 40 ms. Reaction time was calculated as the mean of the remaining six trials after removing the two fastest and two slowest trials. Reaction time variability, which is indicative of attention and compliance, was calculated as the variance across the same six trials. This task was repeated at the start and end of the session; differences between these two runs can indicate fatigue.
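
To make the scoring concrete, a minimal Python sketch of the trimmed-mean calculation described above (not the device's own code) is shown below; the trial values are hypothetical, and the use of the sample variance is an assumption.

```python
import statistics

def srt_summary(reaction_times_ms):
    """Score a 10-trial SRT block as described above: drop the two
    fastest and two slowest trials, then return the mean (reaction
    time) and variance (variability) of the remaining six."""
    if len(reaction_times_ms) != 10:
        raise ValueError("expected 10 trials")
    middle_six = sorted(reaction_times_ms)[2:-2]
    # Sample variance is assumed here; the testing software may use
    # the population variance instead.
    return statistics.mean(middle_six), statistics.variance(middle_six)

# Hypothetical trial data (ms)
rt, rt_var = srt_summary([310, 255, 240, 262, 238, 251, 247, 390, 244, 258])
```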

Sequential Amplitude Discrimination (SeqAD)

Two stimuli of different amplitudes were delivered sequentially, one to each digit, and participants were asked to select which finger felt the larger stimulus by using the mouse to select the appropriate button on the screen. There were a total of 20 trials, with an ITI of 5000 ms. Within each trial, the stimuli were delivered with an interstimulus interval (ISI) of 500 ms, a frequency of 25 Hz and a duration of 500 ms. A reference amplitude of 400 µm was delivered to one finger (the finger varied over trials). The initial difference in amplitudes was 200 µm. This difference was increased or decreased by 20 µm according to a 1 up/1 down ladder for the first 10 trials (i.e., the comparison stimulus amplitude increased for incorrect answers and decreased for correct answers) and a 1 up/2 down ladder for the last 10 trials (i.e., two correct answers were required before the stimulus amplitude decreased). The amplitude discrimination threshold was defined as the mean difference between the stimulus amplitude and the reference amplitude over the last five trials.
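
The ladder logic can be summarized in a short simulation; the sketch below illustrates the rules described above and is not the vendor's implementation. The `respond` callable, the floor that keeps the difference positive, and the treatment of the two correct answers as consecutive are all assumptions.

```python
def seqad_threshold(respond, start_diff=200.0, step=20.0, n_trials=20):
    """Run the SeqAD ladder: 1 up/1 down for trials 1-10, then
    1 up/2 down for trials 11-20. `respond(diff)` returns True when
    the participant answers correctly at the given amplitude
    difference (µm)."""
    diff, streak, presented = start_diff, 0, []
    for trial in range(n_trials):
        presented.append(diff)
        if respond(diff):
            streak += 1
            # One correct answer (first half) or two consecutive
            # correct answers (second half) make the task harder.
            if trial < 10 or streak >= 2:
                diff = max(diff - step, step)  # assumed floor of one step
                streak = 0
        else:
            streak = 0
            diff += step  # an incorrect answer makes the task easier
    return sum(presented[-5:]) / 5  # mean difference over last 5 trials
```

The DD task described below follows the same additive ladder structure, with a 25 ms step applied to stimulus duration.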

Simultaneous Amplitude Discrimination (SimAD)

This task is similar to the SeqAD except that the two stimuli were delivered to the digits simultaneously. The discrimination threshold was determined as the mean difference over the last five trials.

Temporal Order Judgement (TOJ)

Sequential stimuli were delivered, one to each digit, and participants selected which finger felt the first vibration by using the mouse to select the appropriate button on the screen. There were a total of 20 trials; the stimuli had a frequency of 25 Hz, an amplitude of 300 µm in the at-home testing battery (350 µm for the testing in the lab), a duration of 40 ms and an ITI of 5000 ms. The two stimuli were initially delivered with an ISI of 150 ms. The ISI was decreased for the following trial by 15% for correct trials and increased by 15% for incorrect trials (i.e., a 1 up/1 down ladder throughout). The TOJ threshold was determined as the mean ISI of the last five trials.
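
In contrast to the additive amplitude ladder, the TOJ ladder scales the ISI multiplicatively; a minimal sketch under the same assumptions as above:

```python
def toj_threshold(respond, start_isi=150.0, factor=0.15, n_trials=20):
    """Run the TOJ ladder: the ISI (ms) shrinks by 15% after a correct
    response and grows by 15% after an incorrect one (1 up/1 down
    throughout)."""
    isi, presented = start_isi, []
    for _ in range(n_trials):
        presented.append(isi)
        isi *= (1 - factor) if respond(isi) else (1 + factor)
    return sum(presented[-5:]) / 5  # mean ISI over the last 5 trials
```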

Duration Discrimination (DD)

Two sequential vibrations of different durations were delivered, one to each digit, and participants were asked to select which finger received the longer stimulus by using the mouse to select the appropriate button on the screen. There were a total of 20 trials, with an ITI of 5000 ms. Within each trial, the stimuli were delivered with an ISI of 500 ms, a frequency of 40 Hz and an amplitude of 300 µm. A reference stimulus of 750 ms and an initial comparison stimulus of 500 ms were delivered in random order. For the first 10 trials, the difference in duration between the comparison and reference stimuli decreased for correct trials and increased for incorrect trials by 25 ms (i.e., a 1 up/1 down ladder); for the last 10 trials, a 1 up/2 down ladder was used. The duration discrimination threshold was calculated as the mean difference over the last five trials.

Table 1 and Figure 2 present trial information and visual information, respectively, about these tasks.

| Task | Trials (n) | ITI (ms) | ISI (ms) | Duration of Stimulus (ms) | Frequency (Hz) | Standard Amplitude (µm) | Initial Amplitude Difference (µm) | Standard Stimulus (ms) | Test Stimulus (ms) |
|---|---|---|---|---|---|---|---|---|---|
| Simple Reaction Time (SRT) | 10 | 4000-7000 | – | 40 | 25 | 300 | – | – | – |
| Sequential Amplitude Discrimination (SeqAD) | 20 | 5000 | 500 | 500 | 25 | 400 | 200 | – | – |
| Simultaneous Amplitude Discrimination (SimAD) | 20 | 5000 | – | 500 | 25 | 400 | 200 | – | – |
| Temporal Order Judgement (TOJ) | 20 | 5000 | 150 | 40 | 25 | 300/350* | 0 | – | – |
| Duration Discrimination (DD) | 20 | 5000 | 500 | – | 40 | 300 | – | 750 | 500 |

Table 1. Task details. *Note: The amplitude for the at-home testing was 300 µm and 350 µm for the lab visit.

Figure 2. Visual representations of the tasks. (a) Simple Reaction Time (SRT), (b) Sequential Amplitude Discrimination (SeqAD) and Simultaneous Amplitude Discrimination (SimAD), (c) Temporal Order Judgement (TOJ), and (d) Duration Discrimination (DD). Participants were asked to place their left hand on the stimulator device. Stimulation was delivered to the index finger for SRT1 and SRT2 and to both the index and middle fingers for SeqAD, SimAD, TOJ, and DD. The red line represents the standard stimulus and the blue line represents the comparison stimulus.

Home Testing Procedure

Participants were asked to complete the five-task battery at home three times a week for one month, starting at the time of recruitment. Participants were provided with a one-page instruction sheet to take home, which explained each task and the device set-up and included a unique personal identifier (Appendix 1). Google Chrome was used to access the testing interface.

Lab Visit Testing Procedure

After ~1 month, participants were asked to come in for a lab visit that included a similar vibrotactile testing battery. The testing set-up at this visit was the same as that shown in Figure 1. A research assistant was present and provided verbal instructions for the tasks. The Cortical Metrics testing application was used for the lab visit. The lab testing battery included the same five tasks as the home procedure plus the confound conditions for TOJ and DD; these were not included in the home battery because the confounds could potentially confuse participants without an experimenter present.

Statistical Procedures

Group demographics were compared with chi-square and t-tests. Paired t-tests between the most recent home session and the lab session were conducted to assess the consistency of at-home testing results. Results were considered significant at p < .05.
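
The home-lab comparison amounts to a standard paired t-test; for example, in Python with SciPy (the threshold values below are made up for illustration):

```python
from scipy import stats

# Hypothetical SeqAD thresholds (µm) for the same eight participants
# at the final home session and at the lab visit
home = [74.3, 59.0, 81.2, 66.5, 90.1, 70.4, 55.3, 88.0]
lab = [59.0, 62.1, 70.3, 58.8, 95.2, 61.0, 50.2, 79.5]

t_stat, p_value = stats.ttest_rel(home, lab)
print(f"t({len(home) - 1}) = {t_stat:.3f}, p = {p_value:.3f}")
```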

Results

Participant Characteristics

A total of 19 participants were recruited to this study: 11 mTBI and 8 OI. Participants in the mTBI group were 12.8 ± 2.2 years old (36.4% female) and participants in the OI group were 11.6 ± 2.5 years old (57.1% female) at the time of injury (Table 2). The groups did not differ significantly in sex (χ2(1) = 0.748, p = .387) or age (t(16) = 1.104, p = .286).

Three participants did not perform any of the home testing (1 mTBI and 2 OI). Thus, ‘home’ data were available for ten mTBI and six OI participants (Table 2). Reported reasons for not completing tasks at home were needing to install Chrome or an inability to log in.

| | mTBI | OI | Statistical Tests |
|---|---|---|---|
| Age at Injury (mean, SD) | 12.8 (2.2) years | 11.6 (2.5) years | t(16) = 1.104, p = .286 |
| Sex (Females) (n, %) | 4 (36.4%) | 4 (57.1%) | χ2(1) = 0.748, p = .387 |
| Home Sessions | | | |
| Total Participants (n, %) | 10 (62.5%) | 6 (37.5%) | |
| Sex (Females) (n, %) | 4 (40.0%) | 3 (50.0%) | χ2(1) = 0.152, p = .696 |
| Number of Sessions (mean, range) | 5.8 (0-17) | 2.0 (0-5) | |
| Time Since Injury to Recent Home Session (range) | 19-41 days | 13-38 days | |
| Lab Visit | | | |
| Total Participants (n, %) | 10 (55.6%) | 8 (44.4%) | |
| Sex (Females) (n, %) | 4 (40.0%) | 3 (37.5%) | χ2(1) = 0.486, p = .486 |
| Time Since Injury (range) | 16-29 days | 29-52 days | |

Table 2. Demographic characteristics. *Note: There is one OI participant for whom age and sex information was not available.

Of the 19 participants recruited to this study, 18 attended lab visits. One mTBI participant did not schedule their lab visit; as such, data were available for ten mTBI (40.0% female) and eight OI controls (37.5% female). Because the intention of this study is to report the development of at-home testing, comparisons between mTBI and OI were made to investigate testing practices, not mTBI physiology.

Comparison of final home session to lab visit data

The first objective of this paper is to demonstrate that a testing battery for use at home is comparable to lab testing. On average, participants in the mTBI group came in for their lab visit 3.9 (SD 3.4) days after completing their final testing session at home, and the OI group came in for their lab visit 9.0 (SD 9.1) days after completing their final at-home testing.

Paired sample t-tests comparing task performance did not detect significant differences between the data collected at the home session and at the lab visit for the mTBI group. However, participants in the OI group performed significantly worse on the DD task at the lab visit. Table 3 and Figure 3 present the results of the paired samples t-tests and the individual data between these timepoints, respectively.

| Task | Recent Home Session | Lab Session | Paired Samples t-test |
|---|---|---|---|
| mTBI | | | |
| SRT1 | 244.3 ms (39.2 ms) | 249.3 ms (29.3 ms) | t(7) = -0.475, p = 0.649 |
| SRT1 Variability | 29.7 ms (16.6 ms) | 20.7 ms (7.5 ms) | t(7) = 1.764, p = 0.121 |
| SeqAD | 74.3 µm (31.6 µm) | 59.0 µm (38.4 µm) | t(7) = 1.178, p = 0.277 |
| SimAD | 86.1 µm (41.2 µm) | 116.0 µm (57.8 µm) | t(7) = -1.527, p = 0.171 |
| TOJ | 60.7 ms (33.2 ms) | 48.6 ms (26.4 ms) | t(7) = 0.830, p = 0.434 |
| DD | 97.5 ms (50.0 ms) | 130.0 ms (72.2 ms) | t(7) = -1.328, p = 0.226 |
| SRT2 | 281.3 ms (93.4 ms) | 284.3 ms (47.0 ms) | t(7) = -0.100, p = 0.923 |
| SRT2 Variability | 38.2 ms (22.9 ms) | 40.0 ms (25.5 ms) | t(7) = -0.363, p = 0.727 |
| SRT Difference | 42.5 ms (104.1 ms) | 28.9 ms (31.0 ms) | t(7) = 0.344, p = 0.741 |
| OI | | | |
| SRT1 | 318.7 ms (75.0 ms) | 257.9 ms (54.0 ms) | t(4) = 2.243, p = 0.088 |
| SRT1 Variability | 25.2 ms (16.9 ms) | 25.8 ms (13.9 ms) | t(4) = -0.091, p = 0.932 |
| SeqAD | 75.0 µm (42.0 µm) | 49.7 µm (32.2 µm) | t(5) = 2.062, p = 0.094 |
| SimAD | 116.0 µm (53.8 µm) | 92.3 µm (44.5 µm) | t(5) = 1.438, p = 0.210 |
| TOJ | 47.1 ms (28.9 ms) | 49.6 ms (43.1 ms) | t(5) = -0.161, p = 0.879 |
| DD | 88.3 ms (82.6 ms) | 171.7 ms (80.8 ms) | t(5) = -4.469, p = 0.007 |
| SRT2 | 313.0 ms (65.7 ms) | 274.7 ms (82.4 ms) | t(5) = 1.144, p = 0.304 |
| SRT2 Variability | 28.3 ms (7.8 ms) | 27.3 ms (23.9 ms) | t(5) = 0.090, p = 0.932 |
| SRT Difference | -7.2 ms (18.0 ms) | -12.9 ms (12.2 ms) | t(4) = -1.174, p = 0.305 |

Table 3. Performance thresholds for each task (mean and SD).

Individual results from the at-home testing and the lab visit are visualized in Figure 3a-f below. Each line represents one participant.

Overall, performance is comparable between sessions (Table 3), with the exception of DD in the OI group.

Figure 3. Individual participant data from the last home session and the lab session. Each individual is represented by a line connecting their behavioral performance between the two sessions. mTBI participants are shown in pink and OI in blue. (a) Simple Reaction Time 1 (SRT1), (b) Sequential Amplitude Discrimination (SeqAD), (c) Simultaneous Amplitude Discrimination (SimAD), (d) Temporal Order Judgement (TOJ), (e) Duration Discrimination (DD) and (f) Simple Reaction Time 2 (SRT2).

At-home sensory metrics

The second objective of this paper is to demonstrate the feasibility of longitudinal assessment. Sixteen participants performed the at-home testing, completing 1-17 sessions each (Table 2). Overall, the mTBI group performed more home assessments. In this feasibility assessment, we aimed to (a) examine the quality of these results, (b) determine the feasibility of the approach and (c) highlight its challenges.

As a quality check, we visually inspected the data from each testing session to assess whether the participant converged on a threshold for each task. If a participant did not converge on a threshold, this suggests the data are unreliable and do not reflect their true threshold. Table 4 presents mean convergence percentages for all at-home testing completed by all participants in each group. As the majority of tests did converge, participants generally performed well and were able to complete the testing at home. Below we present individual cases of good and poor performance from the longitudinal home data for SRT1 variability, SeqAD and TOJ; we chose these three tasks because they illustrate different aspects of testing and brain metrics.

| Task | Convergence Percentage (%) |
|---|---|
| mTBI | |
| SeqAD | 94.4% |
| SimAD | 100.0% |
| TOJ | 88.6% |
| DD | 95.4% |
| OI | |
| SeqAD | 85.7% |
| SimAD | 100.0% |
| TOJ | 87.5% |
| DD | 95.8% |

Table 4. Convergence percentages (mean).
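
Convergence was judged by visual inspection in this study; an automated heuristic along the following lines could stand in for that check, though the tail length and spread-ratio cutoff below are arbitrary assumptions that would need validation against rater judgments.

```python
import statistics

def appears_converged(ladder_values, tail=5, max_ratio=0.25):
    """Flag a session as converged when the spread of the last `tail`
    ladder values is small relative to the spread across the whole
    session. Both cutoffs are assumptions, not the study's criteria."""
    full_sd = statistics.pstdev(ladder_values)
    if full_sd == 0:
        return True  # flat ladder: trivially converged
    return statistics.pstdev(ladder_values[-tail:]) / full_sd <= max_ratio
```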

Figure 4. SRT1 variability. SRT1 variability is presented for two mTBI participants. The participant in (a) began testing on day 15 post-injury and performed 8 at-home tests. The participant in (b) began testing on day 7 post-injury and performed 16 at-home tests.

SRT1 Variability

In Figure 4a, the participant shows a small reduction in reaction time variability over time, potentially indicating increased compliance and/or improved performance with recovery. By contrast, the inconsistent variability of the second participant (Figure 4b) may indicate unreliable data, possibly due to distractions while the participant was testing, others performing the testing in place of the participant, or post-concussive symptomology that varies over time [20].

Figure 5. SeqAD. SeqAD thresholds are presented for two mTBI participants. The participant in (a) began testing on day 8 post-injury and performed 11 at-home tests. The participant in (b) began testing on day 7 post-injury and performed 15 at-home tests.

SeqAD

Figure 5a presents a participant whose performance is variable but overall appears to improve over the 2.5-week testing period. The variability in responses may be driven by post-concussive symptoms or by environmental factors (e.g., testing with others around). By contrast, the participant in Figure 5b shows large fluctuations in the determined amplitude discrimination threshold, which may indicate unreliable data. Upon analyzing the individual data, this participant showed performance convergence on only 73.3% of the at-home tests.

Figure 6. TOJ. TOJ thresholds are presented for two mTBI participants. The participant in (a) began testing on day 15 post-injury and performed 8 at-home tests. The participant in (b) began testing on day 7 post-injury and performed 15 at-home tests. Red circles indicate sessions where the participant did not converge on a threshold.

TOJ

The participant in Figure 6a shows relatively consistent performance over time, whereas the participant in Figure 6b shows large fluctuations across testing sessions. Upon analyzing the individual data, the participant in Figure 6b converged on only 60% of the TOJ at-home tasks. The testing sessions in which the thresholds did not converge (highlighted in Figure 6b) would typically not be included in data analysis.

Discussion

This study presents a protocol for vibrotactile testing that can be given to participants for unsupervised testing at home. The testing battery included five tasks and took approximately 15 minutes to complete. We show that it is feasible to send a portable tactile stimulator home with participants and acquire quantitative, objective data without requiring a site visit or a trained researcher to administer the protocol. The only requirements were that participants have a computer with a USB port and an internet connection.

Our aim was to implement a testing protocol suitable for remote testing with very little instruction. However, even with this simple protocol, we suggest compliance can remain an issue. Figure 4b, Figure 5b and Figure 6b show variable reaction times and discrimination thresholds within a subject across days, which may indicate non-compliance and thus unreliable data. It could also be indicative of post-concussive symptomology (e.g., fluctuations in difficulty concentrating, headache) [20], as one participant did indicate looking at a screen aggravated their symptoms. To investigate data reliability, we examined individual trial data to determine whether performance converged for each task and testing session. For participants that showed convergence within individual testing sessions, we suggest variable results across time are more likely to reflect post-concussive symptomology. However, for participants that seldom showed convergence around a threshold within a single test, we suggest that compliance was an issue. In addition, this could also indicate that a single test may not accurately capture threshold values for these tasks, and therefore repeated testing may be required to gain a holistic view of participant performance.

We compared the last at-home data with the data acquired in the lab. Overall, there was no systematic bias between at-home and lab testing, which we interpret to indicate that home testing is valid. The exception was DD, which showed decreased performance at the lab visit, though this was only significant in the OI group (Table 3). Visual inspection of Figure 3 shows high overall variability in DD, suggesting participants may have been tired or less focused in anticipation of completing the testing, as this task was closer to the end of the testing protocol.

While non-significant, the trend of improved performance between the last at-home testing and the lab testing in several tasks may be a function of recovery, the more controlled testing environment or increased familiarity with the tasks. Alternatively, a deterioration in performance at the lab visit relative to the final home session may have been due to the less familiar lab environment, decreased engagement due to familiarity with the testing, or general fatigue, as the testing occurred during a scheduled lab visit with multiple components rather than at the participant’s convenience.

We also present longitudinal data from participants performing multiple home testing sessions and show the potential to detect subtle changes in performance over the course of the 1 month testing period. Importantly, the results demonstrated that multiple assessments can be reliably obtained from participants with a home testing protocol. In addition, repeat testing may enable more accurate data collection for the threshold values of vibrotactile tasks.

Though the current study was underpowered to detect group differences, its data can be used for preliminary sample size calculations (performed here in G*Power) [21]. On the RT task, there is a group difference of 75 ms, suggesting a moderate effect size of 0.5 could be expected, and a sample size of 64 per group would be required for 80% power. On the SeqAD task, there is a group difference of 20 µm, suggesting an effect size of 0.5 could be expected and a sample size of 60 per group would be required. Similar parameters can be expected for tasks such as SimAD or DD.
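
The same calculation can be reproduced outside G*Power, for example with statsmodels in Python (two-sided independent-samples t-test, α = .05, power = .80, assumed d = 0.5):

```python
from statsmodels.stats.power import TTestIndPower

n_per_group = TTestIndPower().solve_power(
    effect_size=0.5, alpha=0.05, power=0.8, alternative="two-sided"
)
print(round(n_per_group))  # ~64 participants per group
```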

Recommendations

While our data collection was largely successful, we have several recommendations for future studies.

We recommend a tailored approach incorporating more explicit instructions along with reward incentives to increase engagement, particularly for this age group. Participants were asked to complete this battery three times a week, with the assumption that they would complete it at least twice a week. While participants were able to perform the testing, in many cases they did not attempt multiple testing sessions. Providing interim incentives, as well as incentives upon study completion, may assist with this challenge. In addition, an ongoing check of who is actually completing the tests may be necessary. Regular check-ins or questionnaires completed alongside testing may also help identify the nature of suspected poor compliance, for example whether the testing itself is affecting symptoms or whether multiple people access the testing. These regular communications may also be used for troubleshooting issues that result in an unwillingness to participate; while we did not receive any explicit feedback on this issue, it cannot be excluded. A brief survey completed with the tactile testing to assess tiredness, engagement, focus and relevant symptoms may assist with data interpretation, though it would increase the testing time.

We did not perform a training session before sending participants home with the equipment, but suggest this may be appropriate to ensure they understand the instructions. Furthermore, this would enable the use of longer, more sophisticated protocols. In our home testing battery, we did not perform any testing with confound or priming stimuli, explicitly to ensure tasks were easily understood without someone administering the tests; an initial training session could enable these approaches. Training could also confirm participants understand the testing, which could eliminate the need for the repeated practice trials and may increase compliance. Overall, compliance is a challenge, even in the lab. Our detailed analysis, which includes examining the convergence of thresholds across the ladder, assists in data quality assurance but cannot recover data lost to poor task engagement; we therefore suggest researchers carefully consider how to maintain engagement and ensure data quality.

Conclusion

Vibrotactile sensory testing provides a non-invasive, objective measure of CNS functioning without relying on subjective questionnaires [10,12,17]. This work demonstrates it is possible to perform this testing remotely. Our data from children and adolescents demonstrate they are capable of completing these tasks at home; we therefore expect this at-home testing protocol could easily be administered in an adult population. While we used mTBI/concussion as a model, there are many conditions for which repeated assessments would be relevant (e.g., migraine).

Author Disclosure Statement

M.T. is co-founder of Cortical Metrics, L.L.C., which makes the device used for tactile sensory testing in this study.

Acknowledgments

Study support was provided by the Canada Foundation for Innovation John R Evans Leaders Award, a SickKids CIHR IHDCYH New Investigator Grant, the Canadian Institutes for Health Research, and the Alberta Children’s Hospital Research Institute and the Hotchkiss Brain Institute, University of Calgary.

References

1. Puts Nicolaas A. J., Wodka Ericka L., Tommerdahl Mark, Mostofsky Stewart H., Edden Richard A. E. Impaired tactile processing in children with autism spectrum disorder. Journal of Neurophysiology. 2014; 111(9). DOI
2. Puts Nicolaas A. J., Harris Ashley D., Mikkelsen Mark, Tommerdahl Mark, Edden Richard A. E., Mostofsky Stewart H. Altered tactile sensitivity in children with attention-deficit hyperactivity disorder. Journal of Neurophysiology. 2017; 118(5). DOI
3. Adams Meaghan S., Niechwiej-Szwedo Ewa, McIlroy William E., Staines William R. A History of Concussion Affects Relevancy-Based Modulation of Cortical Responses to Tactile Stimuli. Frontiers in Integrative Neuroscience. 2020; 14. DOI
4. Favorov Oleg V., Francisco Eric, Holden Jameson, Kursun Olcay, Zai Laila, Tommerdahl Mark. Quantification of Mild Traumatic Brain Injury via Cortical Metrics: Analytical Methods. Military Medicine. 2019; 184(Supplement_1). DOI
5. Francisco Eric, Favorov Oleg, Tommerdahl Anna, Holden Jameson, Tommerdahl Mark. Sensory testing as a predictor of short vs. long term trajectory of recovery from concussion. The Journal of Science and Medicine. 2020; 2(4). DOI
6. Ketcham Caroline J., Hall Eric, Bixby Walter R., Vallabhajosula Srikant, Folger Stephen E., Kostek Matthew C., Miller Paul C., Barnes Kenneth P., Patel Kirtida. A Neuroscientific Approach to the Examination of Concussions in Student-athletes. Journal of Visualized Experiments. 2014; 94. DOI
7. Pearce Alan J., Tommerdahl Mark, King Doug A. Neurophysiological abnormalities in individuals with persistent post-concussion symptoms. Neuroscience. 2019; 408. DOI
8. Pearce Alan J., Kidgell Dawson J., Frazer Ashlyn K., King Doug A., Buckland Michael E., Tommerdahl Mark. Corticomotor correlates of somatosensory reaction time and variability in individuals with post concussion symptoms. Somatosensory & Motor Research. 2019; 37(1). DOI
9. Pearce Alan J., Kidgell Dawson J., Tommerdahl Mark A., Frazer Ashlyn K., Rist Billymo, Mobbs Rowena, Batchelor Jennifer, Buckland Michael E. Chronic Neurophysiological Effects of Repeated Head Trauma in Retired Australian Male Sport Athletes. Frontiers in Neurology. 2021; 12. DOI
10. Tommerdahl Mark, Dennis Robert G., Francisco Eric M., Holden Jameson K., Nguyen Richard, Favorov Oleg V. Neurosensory Assessments of Concussion. Military Medicine. 2016; 181(5S). DOI
11. Tommerdahl Mark, Francisco Eric, Holden Jameson, Lensch Rachel, Tommerdahl Anna, Kirsch Bryan, Dennis Robert, Favorov Oleg. An Accurate Measure of Reaction Time can Provide Objective Metrics of Concussion. The Journal of Science and Medicine. 2020; 2(2). DOI
12. Tommerdahl Mark, Favarov Oleg, Wagner Christina D., Walilko Timothy J., Zai Laila, Bentley Timothy B. Evaluation of a Field-Ready Neurofunctional Assessment Tool for Use in a Military Environment. Military Medicine. 2021. DOI
13. Güçlü Burak, Tanıdır Canan, Çanayaz Emre, Güner Bora, İpek Toz Hamiyet, Üneri Özden Ş., Tommerdahl Mark. Tactile processing in children and adolescents with obsessive–compulsive disorder. Somatosensory & Motor Research. 2015; 32(3). DOI
14. Maeda Yumi, Kim Hyungjun, Kettner Norman, Kim Jieun, Cina Stephen, Malatesta Cristina, Gerber Jessica, McManus Claire, Ong-Sutherland Rebecca, Mezzacappa Pia, Libby Alexandra, Mawla Ishtiaq, Morse Leslie R., Kaptchuk Ted J., Audette Joseph, Napadow Vitaly. Rewiring the primary somatosensory cortex in carpal tunnel syndrome with acupuncture. Brain. 2017; 140(4). DOI
15. Tommerdahl Mark, Lensch Rachel, Francisco Eric, Holden Jameson, Favorov Oleg. The Brain Gauge: a novel tool for assessing brain health. The Journal of Science and Medicine. 2019; 1(1). DOI
16. Kaur S., Espenhahn E., Bell T., Godfrey K., Nwaroh C., Giuffre A., Cole L., Beltrano W., Yan T., Stokoe M., Haynes L., Hou T., Tommerdahl M., Bray S., Harris A.D. Non-linear age effects in tactile perception from early childhood to adulthood. PsyArXiv. DOI
17. Puts Nicolaas A. J., Edden Richard A. E., Wodka Ericka L., Mostofsky Stewart H., Tommerdahl Mark. A vibrotactile behavioral battery for investigating somatosensory processing in children and adults. Journal of Neuroscience Methods. 2013; 218(1). DOI
18. Mikkelsen Mark, He Jason, Tommerdahl Mark, Edden Richard A. E., Mostofsky Stewart H., Puts Nicolaas A. J. Reproducibility of flutter-range vibrotactile detection and discrimination thresholds. Scientific Reports. 2020; 10(1). DOI
19. Yeates Keith Owen, Beauchamp Miriam, Craig William, Doan Quynh, Zemek Roger, Bjornson Bruce, Gravel Jocelyn, Mikrogianakis Angelo, Goodyear Bradley, Abdeen Nishard, Beaulieu Christian, Dehaes Mathieu, Deschenes Sylvain, Harris Ashley, Lebel Catherine, Lamont Ryan, Williamson Tyler, Barlow Karen Maria, Bernier Francois, Brooks Brian L., Emery Carolyn, Freedman Stephen B., Kowalski Kristina, Mrklas Kelly, Tomfohr-Madsen Lianne, Schneider Kathryn J. Advancing Concussion Assessment in Pediatrics (A-CAP): a prospective, concurrent cohort, longitudinal study of mild traumatic brain injury in children: protocol study. BMJ Open. 2017; 7(7). DOI
20. Thomas Donald J., Coxe Kathryn, Li Hongmei, Pommering Thomas L., Young Julie A., Smith Gary A., Yang Jingzhen. Length of Recovery From Sports-Related Concussions in Pediatric Patients Treated at Concussion Clinics. Clinical Journal of Sport Medicine. 2018; 28(1). DOI
21. Faul Franz, Erdfelder Edgar, Lang Albert-Georg, Buchner Axel. G*Power 3: A flexible statistical power analysis program for the social, behavioral, and biomedical sciences. Behavior Research Methods. 2007; 39(2). DOI
