Since the commercialization of the radiology business, radiologists have been increasingly asked to read more cases, work longer hours, and engage more with patients and referrers to remain profitable and relevant within the continuum of care. The unintended consequence of these new demands is added cognitive stress on an already maxed-out workforce. Despite the increased workload, there have been few advancements in software, integration, or tools to mitigate the additional risk of error, leaving the radiologist with limited cognitive resources to perform diagnosis. Why has technology failed to play more of a role in supporting the new normal of higher throughput requirements whilst maintaining safety levels? Do they risk missing the gorilla?
Today, radiologists are required to rapidly evolve and develop new capacities to manage multiple tasks and applications: RIS, PACS, the worklist, dictation, slow load times for current and prior cases, and finding and displaying related paperwork, all lacking seamless integration into a common controlled view. Beyond the cumbersome, antiquated software adding to the workload, environmental distractions exist as well: phones (work and personal), instant messaging, crowded screen real estate, and the overall level of background noise in the workspace.
Although all of these distractions play a significant role in diminishing the radiologist's ability to focus on the task at hand, processing visual data, I believe the biggest threat remains legacy software developed in a void of cognitive science. The additional cognitive load is parasitic on the radiologist's available processing capacity, depleting the vital resources needed to make timely diagnoses safely.
How long must we wait for real machine learning and intelligence to find its way into our industry’s software as common practice?
Possibilities: Imagine if your current solution could measure the cognitive load you were experiencing, distinguishing non-radiology workloads (distractions) from case complexity, simply by integrating pupil-dilation measurement, the science of "pupillometry," with hardware already available today such as the Tobii Eye Tracker. Pupillometry has been shown to indicate one's cognitive availability and overall load status based on varying degrees of dilation.
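To make the idea concrete, here is a minimal sketch of how a pupillometry-derived load index might be computed from streamed pupil-diameter readings. The function name, the baseline value, and the sample readings are all hypothetical; this is not the Tobii API, just the core arithmetic of task-evoked pupil dilation relative to a resting baseline.

```python
from statistics import mean

def cognitive_load_index(baseline_mm, samples_mm):
    """Percent pupil dilation relative to a resting baseline.

    Task-evoked pupillary response: a larger relative dilation is
    commonly read as higher cognitive load.
    """
    if baseline_mm <= 0:
        raise ValueError("baseline must be positive")
    return (mean(samples_mm) - baseline_mm) / baseline_mm * 100

# Hypothetical pupil-diameter readings (mm) captured while reading a case.
baseline = 3.2
during_case = [3.5, 3.6, 3.4, 3.7]

load = cognitive_load_index(baseline, during_case)  # ~10.9% dilation
```

In a real integration the baseline would be recalibrated per session, since ambient light also drives pupil size; separating luminance effects from cognitive effects is the hard part the vendor would need to solve.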
Once we integrate this science and technology into our applications, we could provide analysis and feedback loops to better drive the case-allocation algorithms feeding the worklist via an A.I. engine. If the machine learning concludes that you are unable to maintain the current workload, using a formula such as [software-inefficiency cognitive load] + [environmental distractions] + [case difficulty] = (total cognitive load), the system could immediately adjust your workload and offer insights or tips to relax, or to take a break and recalibrate yourself to a safer reading state.
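The formula above can be sketched as a simple threshold rule. The component weights, the 0-to-1 scale, and the 80% safety threshold are all assumptions for illustration, not a validated model.

```python
CAPACITY_THRESHOLD = 0.8  # hypothetical: act when load exceeds 80% of capacity

def total_cognitive_load(software_inefficiency, environmental_distractions,
                         case_difficulty):
    """Sum the three components of the formula, each scaled 0..1."""
    return software_inefficiency + environmental_distractions + case_difficulty

def adjust_worklist(load, capacity=1.0):
    """Return the system's action once load crosses the safe fraction."""
    if load > capacity * CAPACITY_THRESHOLD:
        # e.g. route complex cases elsewhere, suggest a break
        return "reduce_assignments"
    return "maintain"

action = adjust_worklist(total_cognitive_load(0.2, 0.3, 0.4))
```

Here a load of 0.9 exceeds the 0.8 threshold, so the engine would throttle the worklist; a real system would smooth the signal over time rather than react to a single reading.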
In addition, based on a person's unique reading style and capabilities, other types of safety measures and efficiencies can be discovered and applied.
For example: Case Complexity Ordering (CCO). As the system learns your daily patterns, it will determine your unique coefficient and allocate your most difficult cases accordingly. If you are a morning person who demonstrates the strongest cognitive abilities in the AM, your cases will start out more difficult and decrease in complexity throughout your day, aligning your workload with your cognitive ability at the time. Unless, of course, the A.I. Learned Personal Profile (AILPP) algorithm discovers that you are the type who warms up as the day progresses; in that scenario it can adjust the workload to meet your peak performance times.
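A toy version of this ordering might match the hardest cases to the hours with the highest learned ability. The function, the profile values, and the case difficulties below are invented for illustration; a real AILPP would be learned from history, not hard-coded.

```python
def order_cases(cases, hourly_ability):
    """Case Complexity Ordering sketch: assign the hardest cases to the
    hours with the highest learned cognitive ability.

    cases:          list of (case_id, difficulty)
    hourly_ability: {hour_of_day: learned ability score}
    Returns a schedule as [(hour, case_id), ...] sorted by hour.
    """
    # Hours ranked from strongest to weakest ability, one case per hour.
    best_hours = sorted(hourly_ability, key=hourly_ability.get,
                        reverse=True)[:len(cases)]
    # Cases ranked from most to least difficult.
    hardest_first = sorted(cases, key=lambda c: c[1], reverse=True)
    return sorted(zip(best_hours, (cid for cid, _ in hardest_first)))

# Hypothetical "morning person" profile and a four-case worklist.
profile = {8: 0.90, 9: 0.85, 10: 0.70, 11: 0.60}
worklist = [("A", 3), ("B", 9), ("C", 6), ("D", 1)]
schedule = order_cases(worklist, profile)
```

For the "warms up as the day progresses" reader, only the profile changes; the same matching logic then pushes the hard cases later in the shift.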
There is much we can do...
I believe the reason features like these are not mainstream today is shared culpability between medical software vendors and their gorilla-blind customers. It has always been my experience that purchasing decisions for RIS, PACS, and the like are made with bias and without a scientific approach. Most likely, the decision to purchase one package over another will rest on familiarity bias, politics, price, or some other meaningless metric.
We must do better. We must educate ourselves and demand more from all parties. Once the software companies realize we are making purchasing decisions based on the availability of advanced features, they will be forced to put in the work. And when consumers become truly educated about the value of each unique science (anatomical, cognitive, visual, pupillary, etc.) and the role each can play, they will judge software on meaningful, evidence-based outcomes, individually and collectively, as the "new product": improved time-to-cognition, reduced physical stress, and the delivery of a customized, manageable cognitive load throughout one's workday, all measured and reported in real time and fed back into an iterative loop.
Unfortunately, until then I’m afraid we will not see the Gorillas in Our Midst.
Christopher Chabris is an Assistant Professor of Psychology at Union College. In 2004 he was the co-recipient of an Ig Nobel Prize for his now-landmark experiment "Gorillas in Our Midst," which demonstrated that when subjects focused their attention on one thing, they often failed to notice something as conspicuous as a woman in a gorilla suit. His new book "The Invisible Gorilla," based largely on that experiment and reactions to it, explores how the human mind is more fallible than we tend to believe. Chabris received a Ph.D. from Harvard in 1999.