The General Population’s Perspectives on Implementation of Artificial Intelligence in Radiology in the Western Region of Saudi Arabia

Background Artificial intelligence (AI) is a broad spectrum of computer-executed operations that mimic human intellect. It is expected to improve healthcare practice in general and radiology in particular by enhancing image acquisition, image analysis, and processing speed. Despite the rapid development of AI systems, successful application in radiology requires analysis of social factors such as the public’s perspectives on the technology. Objectives The current study aims to investigate the general population’s perspectives on AI implementation in radiology in the Western region of Saudi Arabia. Methods A cross-sectional study was conducted between November 2022 and July 2023 using a self-administered online survey distributed via social media platforms. A convenience sampling technique was used to recruit the study participants. After obtaining Institutional Review Board approval, data were collected from citizens and residents of the Western region of Saudi Arabia aged 18 years or older. Results A total of 1,024 participants were included in the present study, with a mean age of 29.6 ± 11.3 years. Of them, 511 (49.9%) were men and 513 (50.1%) were women. The comprehensive mean score of the first four domains among our participants was 3.93 out of 5.00; higher mean scores indicate a more negative attitude toward AI in radiology, except for the fifth domain. Respondents showed distrust of AI utilization in radiology, as evidenced by their overall mean score of 3.52 out of 5 for the distrust and accountability domain. The majority of respondents agreed that it is essential to understand every step of the diagnostic process, and the mean score for the procedural knowledge domain was 4.34 out of 5. The mean score for the personal interaction domain was 4.31 out of 5, indicating that the participants agreed on the value of direct communication between the patient and the radiologist for discussing test results and asking questions.
Our data show that people consider AI more effective than human doctors at making accurate diagnoses and decreasing patient wait times, with an overall mean score of 3.56 out of 5 for the efficiency domain. Finally, the fifth domain, “being informed,” had a mean score of 3.91 out of 5. Conclusion The application of AI in radiologic assessment and interpretation is generally viewed negatively. Even though people consider AI more efficient and accurate at diagnosing than humans, they still believe that computers will never match a specialist doctor’s years of training.


Introduction
In 1956, John McCarthy, a professor of computer science, was the first to coin the term "artificial intelligence" (AI) [1]. Today, AI is a generic term used to describe the capabilities and functions of machines (computers) that mimic or emulate human intelligence [2].
A key AI technique is deep learning, which is based on neural network structures broadly inspired by the human brain [3]. The field of radiology is a strong candidate for the early integration of AI into the healthcare system, since AI has recently shown rapidly improving performance and is moving quickly into the implementation phase in many fields [4]. Furthermore, AI is expected to significantly enhance the quality, value, and breadth of radiology's contribution to patient care [4].
Although medical imaging has become an essential component of the decision-making process in patient care, and despite the availability of technological tools, radiologists nevertheless make mistakes in imaging interpretation that may have serious consequences for the patient [5]. A previous study conducted in the United States found 1,269 errors among 656 cases, with an average of 251 days from the first misinterpretation to the correct diagnosis [5]. An AI-integrated imaging workflow could improve efficiency, reduce errors, and lessen the need for manual intervention by radiologists; therefore, significant efforts and policies have been developed to facilitate the use of AI-related technology in medical imaging [3]. Despite the rapid advancement of AI systems, radiologists must analyze social factors, such as the public's perspectives on the technology, to successfully apply AI in their field [6].
Thus far, there has been little engagement with patients affected by the application of AI in healthcare, which is alarming because patient concerns about AI may be a significant obstacle to adopting these tools [7]. A recent study in the Netherlands found that 77.8% of 922 participants agreed that a human check was necessary [8]. In addition, a second study of 229 patients in Germany found that 96.2% of patients preferred the physician's opinion to AI, and physician-supervised AI was deemed to be more acceptable than AI that was not supervised by a physician [9].
To date, no study has measured the general population's views on the implementation of AI in radiology in Saudi Arabia. To address this issue, we conducted an electronic survey to investigate the general population's perspectives on AI implementation in radiology in the western region of Saudi Arabia.

Study design and participants
This cross-sectional study investigated the general population's perspectives on AI implementation in radiology in the Western region of Saudi Arabia. Convenience sampling was used to recruit the participants. All citizens and residents currently living in the Western region of Saudi Arabia, aged 18 and older, were included in this study. Participants who refused to participate in the study or did not meet the eligibility requirements were excluded.

Ethical considerations and sample size
The study was conducted between November 2022 and July 2023. After obtaining IRB approval from Umm Al-Qura University's Biomedical Ethics Committee (Approval No. HAPO-02-K-012-2022-11-1302), the data were obtained via an online self-administered questionnaire in the Arabic language, created using Google Forms and distributed among the general population.
The sample size was calculated using the OpenEpi calculator, which indicated that 385 participants would be an appropriate sample size [10]. The target sample size was increased to approximately 1,000 participants to account for potential data loss and to allow the study results to be generalized more effectively.
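The single-proportion sample-size calculation above can be sketched as follows. The parameter values (95% confidence level, Z = 1.96; maximum variability, p = 0.5; 5% margin of error, d = 0.05) are our assumptions about the typical defaults used by calculators such as OpenEpi, not figures stated in the study.

```python
import math

# Cochran's formula for estimating a single proportion. The defaults
# (Z = 1.96 for 95% confidence, p = 0.5 for maximum variability,
# d = 0.05 margin of error) are assumed typical values, not study-reported.
def required_sample_size(z: float = 1.96, p: float = 0.5, d: float = 0.05) -> int:
    n = (z ** 2) * p * (1 - p) / (d ** 2)
    return math.ceil(n)

print(required_sample_size())  # 385, matching the calculated minimum
```

Under these assumptions the formula reproduces the 385-participant minimum reported above.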

Study tool and scoring
The questionnaire was divided into two parts: the first part gathered participants' demographic data through five questions (age, gender, education, specialty or career, and previous radiological imaging), and the second part assessed participants' attitudes toward AI implementation in radiology through 49 attitudinal questions spanning five key domains (distrust and accountability, procedural knowledge, personal interaction, efficiency, and being informed). Higher scores indicated a more negative attitude toward AI in radiology, except for the being informed domain, since item scores in this domain do not reflect a positive or negative attitude regarding AI in radiology. The questionnaire was adapted from a previously published study [6]. An initial pilot study with 30 participants was conducted to assess the validity of the Arabic version of the questionnaire. The data from the pilot study were excluded from the final dataset used in the study.
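As a minimal sketch of the scoring scheme described above, per-domain means and the overall attitudinal score (domains 1-4 only) might be computed as follows; the item responses and the number of items per domain are illustrative placeholders, not study data.

```python
from statistics import mean

# Illustrative item responses on a 1-5 scale for one respondent;
# the values and the item counts per domain are placeholders.
domains = {
    "distrust_accountability": [4, 4, 3],
    "procedural_knowledge": [5, 4, 4],
    "personal_interaction": [5, 4, 4],
    "efficiency": [3, 4, 4],
    "being_informed": [4, 3, 4],
}

# Mean score per domain.
domain_means = {name: mean(items) for name, items in domains.items()}

# Overall attitudinal score: mean of the first four domains only, since
# "being informed" items do not reflect attitude direction.
overall = mean(domain_means[d] for d in list(domains)[:4])
print(round(overall, 2))  # 4.0 for these placeholder responses
```

This mirrors the study's rule of excluding the fifth domain from the overall attitudinal score.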

Statistical analysis
The data were collected, reviewed, and then entered into SPSS version 21 (IBM Corp., Armonk, NY). All statistical tests were two-tailed with an alpha level of 0.05; a p-value less than or equal to 0.05 was considered significant. Regarding participants' perspectives on AI and its role in radiology, a mean score was obtained for each domain, and the overall attitudinal mean score was calculated based on domains 1-4. Domain 5 was excluded from this calculation since it contains items that do not directly assess the direction of attitude toward AI in radiology. Descriptive analysis was performed by presenting frequency distributions and percentages for the study variables, including participants' socio-demographic data, career specialty, and history of undergoing X-rays. Participants' perspectives on AI in radiology were tabulated, and the overall attitudinal mean score for the domains was graphed. Cross-tabulation was used to show the distribution of participants' overall attitude scores (mean with SD) by their personal data and other factors, using one-way ANOVA and the independent t-test.
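The group comparisons described above (performed in SPSS in the study) can be illustrated with an equivalent open-source sketch using SciPy; the score vectors below are fabricated placeholders for demonstration only, not study data.

```python
from scipy import stats

# Placeholder overall-attitude scores for illustration; not study data.
group_a = [3.8, 4.1, 3.9, 4.0, 3.7, 4.2]   # e.g. male respondents
group_b = [4.0, 4.2, 3.9, 4.1, 3.8, 4.3]   # e.g. female respondents

# Independent-samples t-test for a two-level factor such as gender.
t_stat, p_ttest = stats.ttest_ind(group_a, group_b)

# One-way ANOVA for a factor with three or more levels, e.g. education.
secondary = [3.9, 4.0, 4.1, 3.8]
university = [3.8, 4.2, 4.0, 4.1]
postgraduate = [3.7, 3.9, 4.0, 3.8]
f_stat, p_anova = stats.f_oneway(secondary, university, postgraduate)

# Two-tailed tests at alpha = 0.05, as in the study.
print(f"t-test p = {p_ttest:.3f}, ANOVA p = {p_anova:.3f}")
```

Both tests return a test statistic and a two-tailed p-value, which is then compared against the 0.05 alpha level.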

Results
A total of 1,024 eligible participants completed the study questionnaire. Participants' ages ranged from 18 to 70 years, with a mean age of 29.6 ± 11.3 years. Of the participants, 511 (49.9%) were male. Regarding educational level, 629 (61.4%) were university graduates, 311 (30.4%) had a secondary level of education, and 64 (6.3%) had a postgraduate degree. A total of 314 (30.7%) worked in the medical field, 108 (10.5%) were IT/computer science engineers, and 602 (58.8%) were in other specialties. A total of 765 (74.7%) reported that they had had an X-ray before (Table 1).

A total of 91% of participants found it essential to have a good understanding of the results of a scan. Consistently, 89.6% found it essential to be able to ask questions personally about the results of a scan, 87.6% found it essential that a scan provide as much information about their body as possible, 87.3% found it necessary to talk with someone about the results of a scan, and 86.9% found it important to ask questions about the reliability of the results. About 78.9% of participants found it important to be well-informed about how a scan is made.

A total of 75.4% of participants agreed that evaluating scans with AI would reduce healthcare waiting times; 55.9% thought that the sooner they got the results, even when from a computer, the more at ease they would be. A total of 52.9% of participants reported that fewer doctors and radiologists would be required because of AI, and 49.5% thought that humans make more errors than computers (Table 5).

Figure 1 describes the means of the five domains and the overall mean of the first four domains. The highest score was for procedural knowledge (4.34 out of 5), followed by personal interaction (4.31 out of 5), while the lowest score was for AI efficiency (3.56 out of 5). The comprehensive mean score of the first four domains among our participants was 3.93 out of 5.00 (Figure 1).
The overall attitudinal score was calculated based on factors 1-4; factor 5 was excluded from the calculation since it contains items that do not directly assess the direction of attitude toward AI in radiology.

Discussion
The use of AI in medical imaging could have various benefits, including improved efficiency and patient satisfaction with the diagnostic process, shorter wait times, and greater confidence in the diagnosis [11]. AI is expected to soon become more critical in radiology [3]. In the literature, patient attitudes toward AI in healthcare have been discussed; however, little attention has been paid to its application in radiology [6,12,13]. Our goal was to investigate the general population's expectations regarding the application of AI in radiology, which is essential for developing AI systems.
Consistent with our expectations based on the literature, our data suggest that people generally have negative attitudes toward the application of AI in radiology (overall score of 3.93) [6,7,14]. However, several studies have reported the opposite, finding that people were generally positive about using AI to evaluate their radiology reports; a potential reason for this discrepancy is differences in educational level and technical affinity among the populations studied [15,16].
With an average score of 3.52 for the first domain (distrust and accountability), our findings indicate that participants favor human judgment over that of AI systems and that people are somewhat against the idea of AI replacing radiologists' diagnostic decisions. These findings are consistent with the study by Ongena et al., which reported a roughly similar score for this domain (3.28) [6]. This is not surprising, considering multiple reports of decreased trust in AI for medical practice and of the need for physicians to control AI [14,16]. Regarding the second domain (procedural knowledge), respondents agreed, in line with several studies, that it is crucial to have a comprehensive understanding of the entire diagnostic process, including how and what data are obtained and processed, scoring an average of 4.34 [6,15]. The third domain (personal interaction) received an average score of 4.31, almost identical to that of Ongena et al., suggesting that participants agreed on the importance of personal interaction between the patient and the radiologist for discussing examination results and asking questions [6]. For the fourth domain (efficiency), our data suggest that people believe AI to be more effective than human doctors in making accurate diagnoses and shortening patient wait times (average score of 3.56). This contrasts with the findings of Ongena et al., who showed that patients were undecided about whether AI would enhance the diagnostic workflow, with an average score of 2.89 [6].
Scores on several items within the fifth domain (being informed) varied. People, for instance, prefer AI systems that examine the entire body rather than specific body parts (average score of 3.97), which is consistent with the findings of Ongena et al. [6]. To an even greater extent, participants in our sample would like to receive information from AI systems about potential future diseases (average score of 4.00). In contrast, respondents indicated that if computers provided them with results, they would feel a lack of emotional support (mean score of 3.91), which is consistent with the findings of several studies. This makes sense, given that effective patient-doctor communication and empathy matter more to patients than medical outcomes alone [6,15,17].

Limitations
By using a new tool, our study contributes to a better understanding of public attitudes toward AI in radiology. However, there are still a few limitations. First, electronic surveys were distributed via social media, which, in addition to affecting the credibility of the participants' responses, raises the likelihood of sampling bias. Second, despite the large research sample with an equal male-to-female ratio, the study only reflects a single area in Saudi Arabia, making it difficult to generalize the findings across the country.

Conclusions
The present study generally found a negative attitude toward the application of AI in radiologic assessment and interpretation. Although our results indicate that people believe AI to be more effective and accurate in diagnosis than humans, they still believe that computers can never compete with the experience of a specialized doctor, and they favor human judgment over that of AI systems. Furthermore, the participants believe that personal interaction between the patient and the radiologist is essential. Our findings also revealed the necessity of enhancing the clarity of AI systems and providing better and simpler explanations to raise people's confidence in and acceptance of AI utilization.

Additional Information Disclosures
Human subjects: Consent was obtained or waived by all participants in this study. The Biomedical Ethics Committee of the College of Medicine at Umm Al-Qura University, Makkah, Saudi Arabia issued approval HAPO-02-K-012-2022-11-1302. Animal subjects: All authors have confirmed that this study did not involve animal subjects or tissue. Conflicts of interest: In compliance with the ICMJE uniform disclosure form, all authors declare the following: Payment/services info: All authors have declared that no financial support was received from any organization for the submitted work. Financial relationships: All authors have declared that they have no financial relationships at present or within the previous three years with any organizations that might have an interest in the submitted work. Other relationships: All authors have declared that there are no other relationships or activities that could appear to have influenced the submitted work.